WO2003029046A1 - Apparatus and method for sensing the occupancy status of parking spaces in a parking lot - Google Patents


Info

Publication number
WO2003029046A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
parking lot
capturing
parking
captured image
Prior art date
Application number
PCT/US2002/029826
Other languages
French (fr)
Inventor
Maryann Winter
Josef Osterweil
Original Assignee
Maryann Winter
Josef Osterweil
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maryann Winter and Josef Osterweil
Priority to US10/490,115 (US7116246B2)
Publication of WO2003029046A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas

Definitions

  • the present invention is directed to an apparatus and method for determining the location of available parking spaces and/or unavailable parking spaces in a parking lot (facility).
  • the present invention relates more specifically to an optical apparatus and a method for using the optical apparatus that enables an individual and/or the attending personnel attempting to park a vehicle in the parking lot to determine the location of all unoccupied parking locations in the parking lot.
  • the vehicles in a parking lot are of a large variety of models and sizes.
  • the vehicles are randomly parked in given parking spaces and the correlation between given vehicles and given parking spaces changes regularly.
  • it is not uncommon for other objects, such as, but not limited to, for example, construction equipment and/or supplies, dumpsters, snow plowed into a heap, and delivery crates, to be located in a location normally reserved for a vehicle.
  • the images of all parking spaces change as a function of lighting conditions within a 24-hour cycle and from one day to the next. Changes in weather conditions, such as wet pavement or snow cover, will further complicate the occupancy determination and decrease the reliability of such a system.
  • an object of the present invention is to reliably and accurately determine the status of at least one parking space in a parking lot (facility).
  • the present invention is easily installed and operated and is most suitable to large open space or outdoor parking lots.
  • a digital three-dimensional model of a given parking lot is mapped (e.g., an identification procedure is performed) to accurately determine parking space locations where parking spaces are occupied and where parking spaces are not occupied (e.g., the status of the parking space) at a predetermined time period.
  • a capture device produces data representing an image of an object.
  • a processing device processes the data to derive a three-dimensional model of the parking lot, which is stored in a database.
  • a reporting device such as, for example, an occupancy display, indicates the parking space availability.
  • the processing device determines a change in at least one specific property by comparing the three-dimensional model with at least one previously derived three-dimensional model stored in the database. It is understood that a synchronized image capture is a substantially concurrent capture of an image. The degree of synchronization of image capture influences the accuracy of the three-dimensional model when changes are introduced at the scene as a function of time. Additionally, the present invention has the capability of providing information that assists in the management of the parking lot, such as, but not limited to, for example, adjusting the number of handicapped spaces based on the need for such parking spaces over time, and adjusting the number and frequency of shuttle bus service based on the number of passengers waiting for a shuttle bus.
  • the capture device includes, for example, an electronic camera set with stereoscopic features, or plural cameras, or a scanner, or a camera in conjunction with a spatially offset directional illuminator, or a moving capture device in conjunction with synthetic aperture analysis, or any other capture device that captures space diverse views of objects, or a polar capture device (direction and distance from a single viewpoint) for deriving a three-dimensional representation of the objects, including RADAR, LIDAR, or LADAR direction-controlled range-finders or three-dimensional imaging sensors (one such device was announced by Canesta, Inc.).
  • image capture includes at least one of static image capture and dynamic image capture where dynamic image is derived from the motion of the object using successive captured image frames.
  • the capture device includes a memory to store the captured image. Accordingly, the stored captured image may be analyzed by the processing device in near real-time, that is, shortly after the image was captured.
  • An interface is provided to selectively connect at least one capture device to at least one processing device to enable each segment of the parking lot to be sequentially scanned.
  • the image data remains current provided that the time interval between successive scans is relatively short, such as, but not limited to, for example, less than one second.
  • the data representing an image includes information related to at least one of color and texture of the parking lot and the objects therein.
  • This data may be stored in the database and is correlated with selected information, such as, for example, at least one of parking space identification by number, row, section, and the date the data representing the image of the object was produced, and the time the data representing the image of the object was produced.
  • a still further feature of the invention is the inclusion of a pattern generator that projects a predetermined pattern onto the parking lot and the objects therein.
  • the predetermined pattern projected by the pattern generator may be, for example, a grid pattern, and/or a plurality of geometric shapes.
  • a method for measuring and/or characterizing selected parking spaces of the parking lot.
  • the method produces data that represents an image of an object and processes the data to derive a three-dimensional model of the parking lot which is stored in a database.
  • the data indicates at least one specific property of the selected parking space of the parking lot, wherein a change in at least one specific property is determined by comparing at predetermined time intervals the three-dimensional model with at least one previously derived three-dimensional model stored in the database.
  • image capture includes at least one of static image capture and dynamic image capture where dynamic image is derived from the motion of the object using successive captured image frames.
  • the captured image is stored in memory, so that, for example, it is processed in near real-time, that is, a predetermined time after the image was captured, and/or at a location remote from where the image was captured.
  • a still further object of the invention comprises an apparatus for measuring and/or characterizing features of an object, comprising an imaging device that captures a two-dimensional image of the object and a processing device that processes the captured image to produce a three-dimensional representation of the object.
  • the three-dimensional representation includes parameters indicating a predetermined feature of the object.
  • the apparatus also comprises a database that stores the parameters and a comparing device that compares the stored parameters to previously stored parameters related to the monitored space to determine a change in the three-dimensional representation of the monitored space.
  • the apparatus also comprises a reporting/display device that uses results of the comparison by the comparing device to generate a report pertaining to a change in the monitored space.
  • Fig. 1 illustrates a first embodiment of an apparatus for analyzing the presence or absence of objects on parking spaces of a parking lot
  • Fig. 2 illustrates a multi-sensor image processing arrangement according to the present invention
  • FIG. 3 illustrates an example of a processing device of the present invention
  • Figs. 4(a) to 4(e) illustrate optical image transformations produced by the invention of Fig. 1;
  • Fig. 5 illustrates an example of a stereoscopic process for three-dimensional mapping to determine the location of each recognizable landmark on both left and right images produced by the capture device of Fig. 1;
  • Fig. 6 illustrates a second embodiment of the present invention
  • Fig. 7 illustrates a grid form pattern produced by a pattern generator used with the second embodiment of the invention
  • Figs. 8(a) and 8(b) represent left and right images, respectively, that were imaged with the apparatus of the second embodiment
  • Fig. 9 illustrates an example of a parking space occupancy routine according to the present invention
  • Fig. 10 illustrates an example of an Executive Process subroutine called by the parking space occupancy routine of Fig. 9;
  • Fig. 11 illustrates an example of a Configure subroutine called by the parking space occupancy routine of Fig. 9;
  • Fig. 12 illustrates an example of a System Self-Test subroutine called by the parking lot occupancy routine of Fig. 9;
  • Fig. 13 illustrates an example of a Calibrate subroutine called by the parking space occupancy routine of Fig. 9;
  • Fig. 14 illustrates an example of an Occupancy Algorithm subroutine called by the parking space occupancy routine of Fig. 9; and
  • Fig. 15 illustrates an example of an Image Analysis subroutine called by the parking space occupancy detection routine of Fig. 14.
  • an image of an area to be monitored such as, but not limited to, for example, part of a parking lot 5 (predetermined area) is obtained, and the obtained image is processed to determine features of the predetermined area (status), such as, but not limited to, for example, a parked vehicle 4 and/or person within the predetermined area.
  • Fig. 1 illustrates an embodiment of the current invention.
  • two cameras 100a and 100b act as a stereoscopic camera system.
  • Suitable cameras include, but are not limited to, for example, an electronic or digital camera that operates to capture space diverse views of objects, such as, but not limited to, for example, the parking lot 5 and the vehicle 4.
  • the cameras 100a and 100b for obtaining stereoscopic images by triangulation are shown.
  • while a limited number of camera setups will be described herein, it is understood that other (non-disclosed) setups may be equally acceptable and are not precluded by the present invention.
  • a similar stereoscopic triangulation effect can be obtained by multiple spatially-offset cameras to capture multiple views of an image. It is further understood that a stereoscopic triangulation can be obtained by any capture device that captures space diverse views of the parking lot and the objects therein. Furthermore, the present invention may employ a single stationary capture device in conjunction with, but not limited to, for example, a spatially offset direction-controllable illuminator to obtain the stereoscopic triangulation effect.
  • a polar-sensing device for deriving a three-dimensional representation of the objects in the parking lot, including a direction-controlled range-finder or three-dimensional imaging sensor (such as, for example, manufactured by Canesta Inc.), may be used without departing from the spirit and/or scope of the present invention.
  • the cameras 100a and 100b comprise a charge-coupled device (CCD) sensor or a CMOS (complementary metal-oxide-semiconductor) sensor.
  • the sensor comprises, for example, a two-dimensional scanning line sensor or matrix sensor.
  • the present invention is not limited to the particular camera construction or type described herein.
  • a digital still camera, a video camera, a camcorder, or any other electrical, optical, or acoustical device that records (collects) information (data) for subsequent three-dimensional processing may be used.
  • a single sensor may be used when an optical element is applied to provide space diversity (for example, a periscope) on a common CCD sensor and where each of the two images is captured by a respective half of the CCD sensor to provide the data for stereoscopic processing.
  • the image (or images) captured by the camera (or cameras) can be processed substantially "in real time" (e.g., at the time of capturing the image(s)), or stored in, for example, a memory, for delayed processing, without departing from the spirit and/or scope of the invention.
  • a location of the cameras 100a and 100b relative to the vehicle 4, and in particular, a distance (representing a spatial diversity) between the cameras 100a and 100b determines the effectiveness of a stereoscopic analysis of the object 4 and the parking lot 5.
  • dotted lines in Fig. 1 depict the optical viewing angle of each camera.
  • each image captured by the cameras 100a and 100b and their respective sensors is converted to electrical signals having a format that can be utilized by an appropriate image processing device (e.g., a computer 25 shown in Fig. 2, that executes an appropriate image processing routine), so as to, for example, process the captured image, analyze data associated with the captured image, and produce a report related to the analysis.
  • a selector switch 40 enables selection of two cameras from among a plurality of cameras that are dispersed over the parking lot 5 to provide complementary images suitable for stereoscopic analysis.
  • the two obtained images are transformed by an external frame capture device 42.
  • alternatively, the image processor (e.g., computer) 25 may employ an internal frame capture device 26 (Fig. 3).
  • the frame capture device (grabber) converts the camera output to a format recognizable by the computer 25 and its processor 29 (Fig. 3).
  • a digital or analog bus for collecting image data from a selected pair of cameras, or other image data conveyances, can be used instead of the selector switch without departing from the spirit and/or scope of the invention.
  • FIG. 3 illustrates in greater detail the computer 25, including internal and external accessories, such as, but not limited to, a frame capture device 26, a camera controller 26a, a storage device 28, a memory (e.g., RAM) 27, a display controller 30, a switch controller 31 (for controlling selector switch 40), at least one monitor 32, a keyboard 34 and a mouse 36.
  • the computer 25 employed with the present invention comprises, for example, a personal computer based on an Intel microprocessor 29, such as, for example, a Pentium III microprocessor (or compatible processor, such as, for example, an Athlon processor manufactured by AMD), and utilizes the Windows operating system produced by Microsoft Corporation.
  • the construction of such computers is well known to those skilled in the art, and hence, a detailed description is omitted herein.
  • computers utilizing alternative processors and operating systems such as, but not limited to, for example, an Apple Computer or a Sun computer, may be used without departing from the scope and/or spirit of the invention.
  • the operations depicted in Fig. 4 function to derive a three-dimensional model of the object of interest and its surroundings. Extrapolation of the captured image provides an estimate of the three-dimensional model.
  • the computer 25 may be integrated into a single circuit board, or it may comprise a plurality of daughter boards that interface to a motherboard. While the present invention discloses the use of a conventional personal computer that is "customized" to perform the tasks of the present invention, it is understood that alternative processing devices, such as, for example, programmed logic array designed to perform the functions of the present invention, may be substituted without departing from the spirit and/or scope of the invention.
  • the temporary storage device 27 stores the digital data output from the frame capture device 26.
  • the temporary storage device 27 may be, for example, RAM memory that retains the data stored therein as long as electrical power is supplied to the RAM.
  • the long-term storage device 28 comprises, for example, a non-volatile memory and/or a disk drive.
  • the long-term storage device 28 stores operating instructions that are executed by the invention to determine the occupancy status of parking space.
  • the storage device 28 stores routines (to be described below) for calibrating the system, performing a perspective correction, and 3D mapping.
  • the display controller 30 comprises, for example, an ASUS model V7100 video card.
  • Figs. 4(a) to 4(e) illustrate optical image transformations produced by the stereoscopic camera set 100a and 100b of Fig. 1, as well as initial image normalization in the electronic domain.
  • in Fig. 4(a), the object (e.g., the parking lot 5 and its contents) is illustrated as a rectangle with an "X" marking its right half. The marking helps in recognizing the orientation of images.
  • Object 4 lies in a plane skewed relative to the cameras' focal planes, and faces the cameras of Fig. 1.
  • Figs. 4(b) to 4(e) will refer to "right" and "left". However, it is understood that terminology such as "left" and "right" is simply used to differentiate between the plural images produced by the cameras 100a and 100b.
  • Fig. 4(b) represents an image 200 of the object 4 as seen through a left camera (100a in Fig. 1), showing a perspective distortion (e.g., trapezoidal distortion) of the image and maintaining the same orientation ("X" marking on the right half as on the object 4 itself).
  • Fig. 4(c) represents an image 202 of the object 4 as seen through a right camera (100b in Fig. 1) showing a perspective distortion (e.g., trapezoidal distortion) and maintaining the original orientation ("X" marking on the right half as on the object 4 itself).
  • additional distortions may also occur as a result of, but not limited to, for example, an imperfection in the optical elements, and/or an imperfection in the cameras' sensors.
  • the images 204 and 206 must be restored to minimize the distortion effects within the resolution capabilities of the cameras' sensors.
  • the image restoration is done in the electronic and software domains by the computer 25. There are circumstances where the distortions can be tolerated and no special corrections are necessary. This is especially true when the space diversity (the distance between cameras) is small.
  • a database is employed to maintain a record of the distortion shift for each pixel of the sensor of each camera for the best attainable accuracy.
  • the present invention will function with uncorrected (e.g., inherent) distortions of each camera.
  • the database is created at the time of installation of the system, when the system is initially calibrated, and may be updated each time periodic maintenance of the systems' cameras is performed.
  • calibration of the system may be performed at any time without departing from the scope and/or spirit of the invention.
  • the information stored in the database is used to perform a restoration process of the two images, if necessary, as will be described below.
  • This database may be stored, for example, in the computer 25 used with the cameras 100a and 100b.
  • Image 204 in Fig. 4(d) represents a restored version of image 200, derived from the left camera's focal plane sensor, which includes a correction for the above- noted perspective distortion.
  • image 206 in Fig. 4(e) represents a restored version of image 202, derived from the right camera's focal plane sensor, which includes a correction for the above-noted perspective distortion.
  • Fig. 5 illustrates a stereoscopic process for three-dimensional mapping. Parking lots and parked vehicles generally have irregular, three-dimensional shapes. In order to simplify the following discussion, an explanation is set forth with respect to three points of a concave pyramid (not shown): a tip 220 of the pyramid, a projection 222 of the tip 220 on the base of the pyramid perpendicular to the base, and a corner 224 of the base of the pyramid. The tip 220 points away from the camera (not shown). Flat image 204 of Fig. 4(d) and flat image 206 of Fig. 4(e) are shown in Fig. 5.
  • Fig. 5 illustrates the geometrical relationship between the stereoscopic images 204 and 206 of the pyramid and the three-dimensional pyramid defined by the reconstructed tip 220, its projection 222 on the base, and the corner 224 of the base. It is noted that a first image point 226 corresponding to the reconstructed tip 220 of the pyramid is shifted to the left with respect to a projection point 228 on the flat object corresponding to the reconstructed projection point 222 of the reconstructed tip 220.
  • a second image point 230 corresponding to the reconstructed tip of the pyramid 220 is shifted to the right with respect to a projection point 232 on the flat object corresponding to the reconstructed projection point 222 of the reconstructed tip 220.
  • the image points 234 and 236 corresponding to the corner 224 of the base of the pyramid are not shifted because the corner is part of the pyramid's base.
  • the first reconstructed point 222 of the reconstructed tip 220 on the base is derived as the intersection of lines starting at projected points 228 and 232 and inclined at the angles viewed by the left camera 100a and the right camera 100b, respectively.
  • the reconstructed tip 220 is determined from points 226 and 230, whereas a corner point 224 is derived from points 234 and 236.
  • reconstructed points 224 and 222 are on a horizontal line that represents the plane of the pyramid base. It is further noted that reconstructed point 220 is above the horizontal line, indicating a location outside the pyramid base plane on a distant side relative to the cameras.
  • the process of mapping the three-dimensional object is performed in accordance with rules implemented by a computer algorithm executed by the computer 25.
  • the three-dimensional analysis of a scene is performed by use of static or dynamic images. A static image is obtained from a single frame of each capture device.
  • a dynamic image is obtained as a difference of successive frames of each capture device and is executed when objects of interest are in motion. It is noted that using a dynamic image to perform the three-dimensional analysis results in reduction of "background clutter" and enhances the delineation of moving objects of interest by, for example, subtracting successive frames, one from another, resulting in cancellation of all stationary objects captured in the images.
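  • As a rough illustration of this frame-differencing idea, the sketch below derives a dynamic image by subtracting successive frames; the Python/NumPy code and the noise threshold are assumptions for illustration, not part of the original disclosure:

```python
import numpy as np

def dynamic_image(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  noise_threshold: int = 25) -> np.ndarray:
    """Derive a dynamic image by subtracting successive frames.

    Stationary objects produce nearly identical pixel values in both
    frames and cancel in the difference; only moving objects survive.
    The threshold (an assumed tuning constant) suppresses camera noise.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > noise_threshold).astype(np.uint8)  # 1 = motion, 0 = static
```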
  • the present system may be configured to present a visual image of a specific parking lot section being monitored, thus allowing the staff to visually confirm the condition of the parking lot section.
  • the parking lot customer parking availability notification occupancy display (not shown) comprises distributed displays positioned throughout the parking lot directing drivers to available parking spaces. It is understood that alphanumeric or arrow messages for driver direction, such as, but not limited to, for example, a visual monitor or other optoelectric or electro-mechanical device, may be employed, either alone or in combination, without departing from the spirit and/or scope of the invention.
  • the system of the present invention uniquely determines the location of a feature as follows: digital cameras (sometimes in conjunction with frame capture devices) present the image they record to the computer 25 in the form of a rectangular array (raster) of "pixels" (picture elements), such as, for example, 640x480 pixels. That is, the large rectangular image is composed of rows and columns of much smaller pixels, with 640 columns of pixels and 480 rows of pixels.
  • a pixel is designated by a pair of integers (a, b) that represent a horizontal location "a" and a vertical location "b" in the raster of camera i. Each pixel can be visualized as a tiny light beam emanating from a point at the scene into the sensor (camera) 100a or 100b in a particular direction.
  • the camera does not "know" where along that beam the identified "feature" is located. However, when the same feature has been identified by two spatially diverse cameras, the point where the two "beams" from the two cameras cross precisely locates the feature in the three-dimensional space of the monitored parking lot segment. For example, the calibration process (to be described below) determines which pixel addresses (a, b) lie nearest any three-dimensional point (x, y, z) in the monitored space of the parking lot. Whenever a feature on a vehicle is visible in two (or more) cameras, the three-dimensional location of the feature can be obtained by interpolation in the calibration data. The operations performed by the computer 25 on the data obtained by the cameras will now be described. An initial image view C^{ij} captured by a camera is processed to obtain a two-dimensional physical perspective representation. The two-dimensional physical perspective representation of the image is transformed via a general metric transformation: P^{ij} = Σ_{k,l} T^{ij}_{kl} C^{kl}.
  • i and k are indices that range from 1 to N_x, where N_x is the number of pixels in a row, and j and l are indices that range from 1 to N_y, where N_y is the number of pixels in a column.
  • the transformation from the image view C^{ij} to the physical image P^{ij} is a linear transformation governed by T^{ij}_{kl}, which represents both a rotation and a dilation of the image.
  • a three-dimensional correlation is performed on all observed features which are uniquely identified in both images. For example, if L^{ij} and R^{ij} are defined as the left and right physical images of the object under study, respectively, then F = f(L^{ij}, R^{ij}) is the three-dimensional physical representation of all uniquely-defined points visible in a feature of the object which can be seen in two cameras, whose images are designated by L and R.
  • the transformation function f is derived by using the physical transformations for the L and R cameras and the physical geometry of the stereo pair derived from the locations of the two cameras.
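  • To make the "crossing beams" construction concrete, the following sketch (illustrative Python; the patent does not prescribe an implementation) assumes calibration has already mapped each pixel address (a, b) to a ray, an origin and direction in the coordinate frame of the monitored parking lot segment. The feature's three-dimensional location is the point nearest both rays:

```python
import numpy as np

def triangulate(o_left, d_left, o_right, d_right):
    """Return the 3-D point nearest the two pixel 'beams'.

    Each beam is a ray: camera origin o plus direction d.  With ideal
    data the rays intersect; with real data we solve for the closest
    points on each ray by least squares and take their midpoint.
    """
    o1, d1, o2, d2 = map(np.asarray, (o_left, d_left, o_right, d_right))
    A = np.stack([d1, -d2], axis=1)          # solve o1 + t1*d1 = o2 + t2*d2
    (t1, t2), *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2      # closest point on each ray
    return (p1 + p2) / 2

# Hypothetical rays from cameras 100a and 100b toward the same feature:
point = triangulate([0, 0, 3], [0.1, 1, -0.3], [2, 0, 3], [-0.1, 1, -0.3])
# -> [1, 10, 0]: a point on the lot surface, 10 m out from the cameras
```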
  • a second embodiment of a camera system used with the present invention is illustrated in Fig. 6. A discussion of the elements that are common to those in Fig. 1 is omitted herein; only those elements that are new will be described. The second embodiment differs from the first embodiment shown in Fig. 1 by the inclusion of a pattern projector (generator) 136.
  • the pattern projector 136 assists in the stereoscopic object analysis for the three-dimensional mapping of the object.
  • the second embodiment of the present invention employs the pattern generator 136 to project a pattern of light (or shadows).
  • the pattern projector 136 is shown illuminating the object (vehicle) 4 and parking lot segment 5 from a vantage position centered between cameras 100a and 100b.
  • the pattern generator may be located at different positions without departing from the scope and/or spirit of the invention.
  • the pattern generator 136 projects at least one of a stationary and a moving pattern of light onto the parking lot 5, the object (vehicle) 4, and all else that is within the view of the cameras 100a and 100b.
  • the projected pattern is preferably invisible (for example, infrared) light, so long as the cameras can detect the image and/or pattern of light. However, visible light may be used without departing from the scope and/or spirit of the invention. It is noted that the projected pattern is especially useful when the object (vehicle) 4 and/or its surroundings are relatively featureless (parking lot covered by snow), making it difficult to construct a three-dimensional representation of the monitored scene. It is further noted that a moving pattern enhances image processing by the application of dynamic three-dimensional analysis.
  • Fig. 7 illustrates an example of a grid form pattern 138 projected by the pattern projector 136.
  • the pattern can vary from a plain quadrille grid or a dot pattern to more distinct marks, such as many different small geometrical shapes in an ordered or random pattern.
  • dark lines are created on an illuminated background.
  • a moving point of light such as, for example, a laser scan pattern, can be utilized.
  • a momentary illumination of the entire area can provide an overall frame of reference.
  • Fig. 8(a) illustrates a left image 140 and Fig. 8(b) illustrates a right image 142 of a stereoscopic view of a concave volume produced by the stereoscopic camera set 100a and 100b, along with distortions 144 and 146 of the grid form pattern 138 on the left and right images 140 and 142, respectively.
  • the distortions 144 and 146 represent a gradual horizontal displacement of the grid form pattern to the left in the left image 140, and a gradual horizontal displacement of the grid form pattern to the right in the right image 142.
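  • This horizontal displacement is the stereo disparity of each grid intersection, and under the common parallel-camera pinhole model it converts directly to distance. A hedged illustration using standard stereo geometry (not notation taken from the patent):

```python
def disparity_to_depth(disparity_px: float, focal_px: float,
                       baseline_m: float) -> float:
    """Depth of a grid intersection from its left/right image shift.

    disparity_px: horizontal displacement between left and right images
    focal_px:     lens focal length expressed in pixels
    baseline_m:   spacing between the two cameras (spatial diversity)
    """
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: a 32-pixel shift, f = 800 px, 2 m baseline
# gives 800 * 2 / 32 = 50 m to the reflecting surface.
```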
  • a variation of the second embodiment involves using a pattern generator that projects a dynamic (e.g., non-stationary) pattern, such as a raster scan onto the object (vehicle) 4 and the parking lot 5 and all else that is in the view of the cameras 100a and 100b.
  • the cameras 100a and 100b capture the reflection of the pattern from the parking lot 5 and the object (vehicle) 4 that enables dynamic image analysis as a result of motion registered by the capture device.
  • Another variation of the second embodiment is to use a pattern generator that projects uniquely-identifiable patterns, such as, but not limited to, for example, letters, numbers or geometric patterns, possibly in combination with a static or dynamic featureless pattern. This prevents the mislabeling of identification of intersections in stereo pairs, that is, incorrectly correlating an intersection in a stereo pair with one in a second photo of the pair, which is actually displaced one intersection along one of the grid lines.
  • images obtained from cameras 100a and 100b are formatted by the frame capture device 26 to derive parameters that describe the position of the object (vehicle) 4.
  • This data is used to form a database that is stored in either the short-term storage device 27 or the long-term storage device 28 of the computer 25.
  • subsequent images are then analyzed in real-time and compared to previous data for changes, in order to determine the motion, rate of motion, and/or change of orientation of the vehicle 4. This data is used to characterize the status of the vehicle.
  • a database for the derived parameters may be constructed using a commercially available software program called ACCESS, which is sold by Microsoft. If desired, the raw image may also be stored.
  • the construction and/or operation of the present invention is not to be construed to be limited to the use of Microsoft ACCESS.
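  • For illustration, the derived-parameter records might be organized as below. The patent names Microsoft ACCESS; SQLite is used here only as a self-contained stand-in, and every column name is an assumption based on the correlated information listed earlier (space number, row, section, date, time):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the ACCESS database
conn.execute("""
    CREATE TABLE occupancy (
        space_number  INTEGER,  -- parking space identification by number
        lot_row       TEXT,     -- row within the parking lot
        section       TEXT,     -- section of the parking lot
        captured_date TEXT,     -- date the image data was produced
        captured_time TEXT,     -- time the image data was produced
        occupied      INTEGER   -- 1 = occupied, 0 = available
    )
""")
conn.execute("INSERT INTO occupancy VALUES (17, 'B', 'North', "
             "'2002-09-20', '14:05:00', 1)")
```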
  • Subsequent images are analyzed for changes in position, motion, rate of motion and/or change of orientation of the object.
  • the tracking of the sequences of motion of the vehicle enables dynamic image analysis and provides further optional improvement to the algorithm.
  • the comparison of sequential images (that are, for example, only seconds apart) of moving or standing vehicles can help identify conditions in the parking lot that, due to partial obstructions, may not be obvious from a static analysis.
  • the analysis can capture the individuals walking in the parking lot and help monitor their safety, or be used for other security and parking lot management purposes.
  • incidents on the parking lot can be played back to provide evidence for the parties in the form of a sequence of events of an occurrence.
  • the present invention additionally serves as a security device.
  • a specific software implementation of the present invention will now be described. However, it is understood that variations to the software implementation may be made without departing from the scope and/or spirit of the invention. While the following discussion is provided with respect to the installation of the present invention in one section of a parking lot, it is understood that the invention is applicable to any size or type of parking facility by duplicating the process in other segments.
  • Fig. 9 illustrates the occupancy detection process that is executed by the present invention. Initially, an Executive Process subroutine is called at step S10. Once this subroutine is completed, processing proceeds to step S12 to determine whether a Configuration Process is to be performed. If the determination is affirmative, processing proceeds to step S14, wherein the Configuration subroutine is called. Once the Configuration subroutine is completed, processing continues at step S16. On the other hand, if the determination at step S12 is negative, processing proceeds from step S12 to S16.
  • at step S16, a determination is made as to whether a Calibration operation should be performed. If it is desired to calibrate the system, processing proceeds to step S18, wherein the Calibrate subroutine is called, after which a System Self-test operation (step S20) is called. However, if it is determined that a system calibration is not required, processing proceeds from step S16 to step S20.
  • at step S22, an Occupancy Algorithm subroutine is called, as sketched below.
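  • Read as pseudocode, the control flow of Fig. 9 amounts to the loop below; the function names mirror the subroutine names in the figure, the bodies are stubs, and everything else (the system object, its flags) is assumed:

```python
def executive_process(s):    pass  # step S10: keyboard/mouse/display service
def configure(s):            pass  # step S14: identify cameras and segment
def calibrate(s):            pass  # step S18: capture empty-lot baseline
def system_self_test(s):     pass  # step S20: camera sync, switching, capture
def occupancy_algorithm(s):  pass  # step S22: image analysis and occupancy

def occupancy_detection(system, cycles=1):
    """Top-level flow of Fig. 9 (sketch; runs continuously in practice)."""
    for _ in range(cycles):
        executive_process(system)                          # step S10
        if getattr(system, "configure_requested", False):  # step S12
            configure(system)                              # step S14
        if getattr(system, "calibrate_requested", False):  # step S16
            calibrate(system)                              # step S18
        system_self_test(system)   # step S20 (reached on either branch)
        occupancy_algorithm(system)                        # step S22
```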
  • Fig. 10 illustrates the Executive Process subroutine that is called at step S10.
  • a Keyboard Service process is executed at step S30, which responds to operator input via a keyboard 34 (see Fig. 3) that is attached to the computer 25.
  • a Mouse Service process is executed at step S32, in order to respond to operator input from a mouse 36 (see Fig. 3).
  • if an occupancy display has been activated, an Occupancy Display Service process is performed (step S34). This process determines whether and when additional occupancy display changes must be executed to ensure that they reflect the latest parking lot condition and provide proper guidance to the drivers.
  • Step S36 is executed when the second embodiment is used.
  • otherwise, step S36 is deleted or bypassed (not executed).
  • projector 136 (Fig. 6) is controlled to generate patterns of light to provide artificial features on the object when the visible features are not sufficient to determine the condition of the object.
  • Fig. 11 illustrates the Configure subroutine that is called at step S14. This subroutine comprises a series of operations, some of which are performed automatically and some of which require operator input.
  • at step S40, the capture devices (such as one or more cameras) are identified, along with their coordinates (locations). It is also noted that some cameras may be designed to automatically identify themselves, while other cameras may require identification by the operator. It is noted that this operation to update system information is required only when the camera (or its wiring) is changed.
  • Step S42 is executed to identify what video switches and capture boards are installed in the computer 25, and to control the cameras (via camera controller 26a shown in Fig. 3) and convert their video to computer usable digital form. It is noted that some cameras generate data in a digital form already compatible with computer formats and do not require such conversion. Thereafter, step S44 is executed to inform the system of which segment of the parking lots is to be monitored.
  • Fig. 12 illustrates the operations that are performed when the System Self-test subroutine (step 20) is called.
  • This subroutine begins with a Camera Synchronization operation (step S50), in which the cameras are individually tested, and then re-tested in concert, to ensure that they can capture video images of the monitored volume(s) with sufficient simultaneity that stereo pairs of images will yield accurate information about the monitored parking lot segment.
  • a Video Switching operation is performed (step S52) to verify that the camera video can be transferred to the computer 25.
  • step S54 An Image Capture operation is also performed (step S54) to verify that the images of the monitored volume, as received from the cameras, are of sufficient quality to perform the tasks required of the system.
  • the operation of the computer 25 is then verified (step S56), after which, processing returns to the routine shown in Fig. 9.
  • the Calibrate subroutine called at step S18 is illustrated in Fig. 13.
  • the calibration operation is performed when the monitored parking lot segment is empty of vehicles.
  • the system captures the lines which delineate the parking spaces in the monitored predetermined area of the parking lot as part of deriving the parking lot parameters.
  • Each segment of demarcation lines between parking spaces is determined and three-dimensionally defined (step S62) and stored as part of a baseline in the database (step S64). It is noted that three-dimensional modeling of a few selected points on the demarcation lines between parking spaces can define the entire demarcation line cluster.
  • Height calibration is performed when initial installation is completed. When height calibration is requested by the computer operator and verified by step S66, the calibration is performed by collecting height data (step S68) of an individual of known height. The individual walks on a selected path within the monitored parking lot segment while wearing distinctive clothing that contrasts well with the parking lot's surface (e.g., a white hard-hat if the parking lot surface is black asphalt).
  • the height analysis can be performed on dynamic images since the individual target is in motion (dynamic analysis is often considered more reliable than static analysis). In this regard, the results of the static and dynamic analyses may be superimposed (or otherwise combined, if desired).
  • the height data is stored in the database as another part of a baseline for reference (step S70).
  • the height calibration is set to either a predetermined duration (e.g., two minutes) or by verbal coordination by the computer operator, who instructs the height-data-providing individual to walk through the designated locations on the parking lot until the height calibration is completed.
  • the calibration data is collected to the nearest pixel of each camera sensor. The camera resolution will therefore have an impact on the accuracy of the calibration data as well as the occupancy detection process.
  • Fig. 14 illustrates the Occupancy Algorithm subroutine that is called at step S22.
  • an Image Analysis subroutine (to be described below) is called at step S80.
  • Image preprocessing methods common in the field of image processing such as, but not limited to, for example, outlier detection and time-domain integration, are performed to reduce the effects of camera noise, artifacts, and environmental effects (e.g. glare), on subsequent processing.
  • Edge enhancing processes common in the field of image processing are performed to provide clear delineation between objects in the captured images.
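  • One hedged way to realize the preprocessing named above, outlier rejection with time-domain integration followed by edge enhancement, using NumPy (all choices here are illustrative, not prescribed by the patent):

```python
import numpy as np

def preprocess(frames: list) -> np.ndarray:
    """Suppress noise/glare over a short frame stack, then find edges.

    A per-pixel median over time rejects outliers (e.g. a momentary
    glare spike) while integrating away camera noise; a gradient
    magnitude then delineates boundaries between objects.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    integrated = np.median(stack, axis=0)   # outlier-robust time integration
    gy, gx = np.gradient(integrated)        # spatial intensity gradients
    return np.hypot(gx, gy)                 # edge-magnitude image
```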
  • dynamic image analysis is utilized.
  • Image analysis data is processed as dynamic analysis when, for example, a vehicle is stationary but wind driven tree branches cast a moving shadow on the vehicle's surface. Since the moving shadows reflected from the vehicle's surface are registered by the capture device as moving objects, they are suitable for dynamic analysis.
  • the image analysis subroutine creates a list for each camera, in which the list contains data on the objects and feature(s) in the monitored parking lot segment seen by that camera.
  • processing resumes at step S84, where common elements (features) seen by two cameras are determined. For each camera that sees each list element, a determination is made as to whether only one camera sees the feature or whether two cameras see the feature. If only one camera sees the feature, a two-dimensional model is constructed (step S86). The two-dimensional model estimates where the feature would be on the parking lot surface, and where it would be if the vehicle was parked at a given parking space.
  • otherwise, the three-dimensional location of the feature is determined at step S88. Correlation between common features in images of more than one camera can be performed directly or by a transform function (such as the Fast Fourier Transform; see the sketch below) of a feature being correlated. Other transform functions may be employed for enhanced common feature correlation without departing from the scope and/or spirit of the instant invention. It is noted that steps S84, S86 and S88 are repeated for each camera that sees the list element. It is also noted that once a predetermined number of three-dimensional correlated features of two camera images are determined to be above a predetermined occupancy threshold of a given parking space, that parking space is deemed to be occupied and no further feature analysis is required.
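  • A common form of transform-based correlation is phase correlation; the sketch below (an assumed implementation, not taken from the patent) finds the displacement of a feature patch between two camera images via the FFT:

```python
import numpy as np

def fft_correlate(patch_a: np.ndarray, patch_b: np.ndarray):
    """Displacement of a feature between two equally-sized patches.

    Cross-correlation is evaluated in the frequency domain, which is
    equivalent to sliding one patch over the other at every offset but
    far cheaper.  Returns the (row, col) shift of the strongest peak.
    """
    cross = np.fft.fft2(patch_a) * np.conj(np.fft.fft2(patch_b))
    cross /= np.abs(cross) + 1e-12          # keep phase only (sharp peak)
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT correlation is circular, so large shifts wrap around:
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```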
  • Both the two-dimensional model and the three-dimensional model assemble the best estimate of where the vehicle is relative to the parking area surface, and where any unknown objects are relative to the parking area surface (step S90) at each parking space. Then, at step S92, the objects for which a three-dimensional model is available are tested. If the model places the object close enough to the parking lot surface to be below a predetermined occupancy threshold, an available flag is set (step S94) to update the occupancy displays.
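  • The decision of steps S92/S94 thus reduces to a height test: a space is available when its modeled features all lie near the lot surface, and occupied once enough features stand above the occupancy threshold. A minimal sketch, with both threshold values assumed rather than taken from the patent:

```python
def space_is_occupied(feature_heights_m, height_threshold_m=0.3,
                      min_features=5) -> bool:
    """Occupancy test for one parking space (steps S92/S94, sketched).

    feature_heights_m: heights above the lot surface of the
    three-dimensionally located features that fall inside the space.
    """
    elevated = [h for h in feature_heights_m if h > height_threshold_m]
    return len(elevated) >= min_features

# A parked car yields many features well above grade -> occupied:
assert space_is_occupied([1.2, 1.1, 0.9, 1.4, 1.0, 0.05])
# Pavement markings alone stay near 0 m -> available:
assert not space_is_occupied([0.02, 0.01, 0.0])
```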
  • Fig. 15 illustrates the Image Analysis subroutine that is called at step S80. As previously noted, this subroutine creates a list for each camera, in which the list contains data of objects and feature(s) on the monitored parking lot segment for each camera.
  • step S120 is executed to obtain camera images in real-time (or near real-time).
  • Three-dimensional models of the monitored object are maintained in the temporary storage device (e.g., RAM) 27 of the computer 25.
  • an operation to identify the object is initiated (step S122). In the disclosed embodiments, this is accomplished by noting features on the object 4 and determining whether they are found and are different from the referenced empty parking lot segment (as stored in the database). If they are found, the three-dimensional model is updated. However, if only one camera presently sees the object, a two-dimensional model is constructed. Note that the two-dimensional model will rarely be utilized if the camera placement ensures that each feature is observed by more than one camera.
  • the indicating device provides an indication of the availability of at least one available parking space (that is, an indication of empty parking spaces is provided).
  • the present invention may alternatively provide an indication of which parking space(s) are occupied.
  • the present invention may provide an indication of which parking space(s) is (are) available for parking and which parking space(s) is (are) unavailable for parking.
  • the present invention may be utilized for parking lot management functions. These functions include, but are not limited to, for example, ensuring the proper utilization of handicapped parking spaces, the scheduling of shuttle transportation, and for determining the speed at which the vehicles travel in the parking lot.
  • the availability of handicapped spaces may be periodically adjusted according to statistical evidence of their usage, as derived from the occupancy data (status).
  • Shuttle transportation may be effectively scheduled based on the number of passengers recorded by the three-dimensional model (near real-time) at a shuttle stop.
  • the scheduling may be determined based, for example, on the amount of time individuals wait at a shuttle stop.
  • Vehicle speed control can be determined, for example, by a dynamic image analysis of a traveled area of the parking lot. Dynamic image analysis determines the velocity of movement at each monitored location.
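  • For instance, speed can be estimated from successive three-dimensional positions of a tracked feature; this one-line illustration is an assumption, as the patent gives no formula:

```python
import math

def estimate_speed(pos_prev, pos_curr, dt_s: float) -> float:
    """Speed (m/s) of a tracked feature between two frames dt_s apart."""
    return math.dist(pos_prev, pos_curr) / dt_s

# e.g. a feature moving 1.2 m between frames 0.2 s apart travels 6 m/s:
print(estimate_speed((0, 0, 0), (1.2, 0, 0), 0.2))
```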

Abstract

Method and apparatus for analyzing a status of an object (4) in a predetermined area (6) of a parking lot (5) facility having a plurality of parking spaces. An image of the predetermined area (6) of the parking lot (5) that may include one or more objects (4), is captured. A three-dimensional model is produced from the captured image. A test is then performed on the produced model to determine an occupancy status of at least one parking space in the predetermined area. An indicating device provides information regarding the determined occupancy status.

Description

APPARATUS AND METHOD FOR SENSING THE OCCUPANCY STATUS OF PARKING SPACES IN A PARKING LOT
1. Related Data [0001] The present application expressly incorporates by reference herein the entire disclosure of U.S. Provisional Application No. 60/326,444, entitled "Apparatus and Method for Sensing the Occupation Status of Parking Spaces In a Parking Lot", which was filed on October 3, 2001.
2. Field Of The Invention [0002] The present invention is directed to an apparatus and method for determining the location of available parking spaces and/or unavailable parking spaces in a parking lot (facility). The present invention relates more specifically to an optical apparatus and a method for using the optical apparatus that enables an individual and/or the attending personnel attempting to park a vehicle in the parking lot to determine the location of all unoccupied parking locations in the parking lot.
BACKGROUND AND RELATED INFORMATION [0003] Individuals that are attempting to park their vehicle in a parking lot often have to search for an unoccupied parking space. In a large public parking lot without preassigned parking spaces, such a search is time consuming, harmful to the ecology, and often frustrating.
[0004] As a result, a need exists for an automated system that determines the availability of parking spaces in the parking lot and displays them in a manner visible to the driver. Systems developed to date require sensors (i.e., ultrasonic, mechanical, inductive, and optical) to be distributed throughout the parking lot with respect to every parking space. These sensors have to be removed and reinstalled each time major parking lot maintenance or renovation is undertaken.
[0005] Typically, the vehicles in a parking lot are of a large variety of models and sizes. The vehicles are randomly parked in given parking spaces and the correlation between given vehicles and given parking spaces changes regularly. Further, it is not uncommon for other objects, such as, but not limited to, for example, construction equipment and/or supplies, dumpsters, snow plowed into a heap, and delivery crates, to be located in a location normally reserved for a vehicle. Moreover, the images of all parking spaces change as a function of lighting conditions within a 24-hour cycle and from one day to the next. Changes in weather conditions, such as wet pavement or snow cover, will further complicate the occupancy determination and decrease the reliability of such a system.
SUMMARY OF THE INVENTION
[0006] Accordingly, an object of the present invention is to reliably and accurately determine the status of at least one parking space in a parking lot (facility). The present invention is easily installed and operated and is most suitable to large open space or outdoor parking lots. According to the present invention, a digital three-dimensional model of a given parking lot is mapped (e.g., an identification procedure is performed) to accurately determine parking space locations where parking spaces are occupied and where parking spaces are not occupied (e.g., the status of the parking space) at a predetermined time period. A capture device produces data representing an image of an object. A processing device processes the data to derive a three-dimensional model of the parking lot, which is stored in a database. A reporting device, such as, for example, an occupancy display, indicates the parking space availability. The processing device determines a change in at least one specific property by comparing the three-dimensional model with at least one previously derived three-dimensional model stored in the database. It is understood that a synchronized image capture is a substantially concurrent capture of an image. The degree of synchronization of image capture influences the accuracy of the three-dimensional model when changes are introduced at the scene as a function of time. Additionally, the present invention has the capability of providing information that assists in the management of the parking lot, such as, but not limited to, for example, adjusting the number of handicapped spaces based on the need for such parking spaces over time, and adjusting the number and frequency of shuttle bus service based on the number of passengers waiting for a shuttle bus. It is noted that the allocation of handicapped parking spaces is effective when, for example, a predetermined percentage of unoccupied handicapped parking spaces are available for new arrivals. [0007] According to an advantage of the invention, the capture device includes, for example, an electronic camera set with stereoscopic features, or plural cameras, or a scanner, or a camera in conjunction with a spatially offset directional illuminator, or a moving capture device in conjunction with synthetic aperture analysis, or any other capture device that captures space diverse views of objects, or a polar capture device (direction and distance from a single viewpoint) for deriving a three-dimensional representation of the objects, including RADAR, LIDAR, or LADAR direction-controlled range-finders or three-dimensional imaging sensors (one such device was announced by Canesta, Inc.). It is noted that image capture includes at least one of static image capture and dynamic image capture, where a dynamic image is derived from the motion of the object using successive captured image frames.
[0008] According to a feature of the invention, the capture device includes a memory to store the captured image. Accordingly, the stored captured image may be analyzed by the processing device in near real-time, that is, shortly after the image was captured. An interface is provided to selectively connect at least one capture device to at least one processing device to enable each segment of the parking lot to be sequentially scanned. The image data remains current provided that the time interval between successive scans is relatively short, such as, but not limited to, for example, less than one second. [0009] According to another feature of the invention, the data representing an image includes information related to at least one of color and texture of the parking lot and the objects therein. This data may be stored in the database and is correlated with selected information, such as, for example, at least one of parking space identification by number, row, section, and the date the data representing the image of the object was produced, and the time the data representing the image of the object was produced. [0010] A still further feature of the invention is the inclusion of a pattern generator that projects a predetermined pattern onto the parking lot and the objects therein. The predetermined pattern projected by the pattern generator may be, for example, a grid pattern and/or a plurality of geometric shapes.
[0011] According to another object of the invention, a method is disclosed for measuring and/or characterizing selected parking spaces of the parking lot. The method produces data that represents an image of an object and processes the data to derive a three-dimensional model of the parking lot, which is stored in a database. The data indicates at least one specific property of the selected parking space of the parking lot, wherein a change in at least one specific property is determined by comparing, at predetermined time intervals, the three-dimensional model with at least one previously derived three-dimensional model stored in the database. [0012] According to an advantage of the present invention, a method is provided for image capture and derivation of a three-dimensional image by stereoscopic triangulation using at least one spatially diverse image capture device and/or directional illumination device, by polar analysis using directional ranging devices, or by synthetic aperture analysis using a moving capture device. It is noted that image capture includes at least one of static image capture and dynamic image capture, where a dynamic image is derived from the motion of the object using successive captured image frames.
[0013] According to a further advantage of this method, the captured image is stored in memory, so that, for example, it is processed in near real-time, that is, a predetermined time after the image was captured, and/or at a location remote from where the image was captured.
[0014] According to a still further object of the invention, a method is disclosed for characterizing features of an object, in which an initial image view is transformed to a two-dimensional physical perspective representation of an image corresponding to the object. The unique features of the two-dimensional perspective representation of the image are identified. The identified unique features are correlated to produce a three-dimensional physical representation of all uniquely-identified features, and three-dimensional characteristic features of the object are determined. [0015] A still further object of the invention comprises an apparatus for measuring and/or characterizing features of an object, comprising an imaging device that captures a two-dimensional image of the object and a processing device that processes the captured image to produce a three-dimensional representation of the object. The three-dimensional representation includes parameters indicating a predetermined feature of the object. The apparatus also comprises a database that stores the parameters and a comparing device that compares the stored parameters to previously stored parameters related to the monitored space to determine a change in the three-dimensional representation of the monitored space. The apparatus also comprises a reporting/display device that uses results of the comparison by the comparing device to generate a report pertaining to a change in the monitored space.
BRIEF DESCRIPTION OF THE DRAWINGS [0016] The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments, as illustrated in the accompanying drawings which are presented as a non-limiting example, in which reference characters refer to the same parts throughout the various views, and wherein:
Fig. 1 illustrates a first embodiment of an apparatus for analyzing the presence or absence of objects on parking spaces of a parking lot;
Fig. 2 illustrates a multi-sensor image processing arrangement according to the present invention;
Fig. 3 illustrates an example of a processing device of the present invention;
Figs. 4(a) to 4(e) illustrate optical image transformations produced by the invention of Fig. 1;
Fig. 5 illustrates an example of a stereoscopic process for three-dimensional mapping to determine the location of each recognizable landmark on both left and right images produced by the capture device of Fig. 1;
Fig. 6 illustrates a second embodiment of the present invention;
Fig. 7 illustrates a grid form pattern produced by a pattern generator used with the second embodiment of the invention;
Figs. 8(a) and 8(b) represent left and right images, respectively, that were imaged with the apparatus of the second embodiment;
Fig. 9 illustrates an example of a parking space occupancy routine according to the present invention;
Fig. 10 illustrates an example of an Executive Process subroutine called by the parking space occupancy routine of Fig. 9;
Fig. 11 illustrates an example of a Configure subroutine called by the parking space occupancy routine of Fig. 9;
Fig. 12 illustrates an example of a System Self-Test subroutine called by the parking space occupancy routine of Fig. 9;
Fig. 13 illustrates an example of a Calibrate subroutine called by the parking space occupancy routine of Fig. 9;
Fig. 14 illustrates an example of an Occupancy Algorithm subroutine called by the parking space occupancy routine of Fig. 9; and
Fig. 15 illustrates an example of an Image Analysis subroutine called by the Occupancy Algorithm subroutine of Fig. 14.
DETAILED DISCLOSURE OF THE INVENTION [0017] The particulars shown herein are by way of example and for purposes of illustrative discussion of embodiments of the present invention only, and are presented in the cause of providing what is believed to be a most useful and readily understood description of the principles and conceptual aspects of the present invention. In this regard, no attempt is made to show structural details of the present invention in more detail than is necessary for the fundamental understanding of the present invention, the description taken with the drawings making it apparent to those skilled in the art how the present invention may be embodied in practice.
[0018] According to the present invention, an image of an area to be monitored, such as, but not limited to, for example, part of a parking lot 5 (predetermined area) is obtained, and the obtained image is processed to determine features of the predetermined area (status), such as, but not limited to, for example, a parked vehicle 4 and/or person within the predetermined area.
[0019] Fig. 1 illustrates an embodiment of the current invention. As shown in Fig. 1, two cameras 100a and 100b act as a stereoscopic camera system. Suitable cameras include, but are not limited to, for example, an electronic or digital camera that operates to capture space diverse views of objects, such as, but not limited to, for example, the parking lot 5 and the vehicle 4. In the disclosed embodiment, the cameras 100a and 100b for obtaining stereoscopic images by triangulation are shown. In this regard, while a limited number of camera setups will be described herein, it is understood that other (non-disclosed) setups may be equally acceptable and are not precluded by the present invention.
[0020] While the disclosed embodiment utilizes two cameras, it is understood that a similar stereoscopic triangulation effect can be obtained by multiple spatially-offset cameras to capture multiple views of an image. It is further understood that a stereoscopic triangulation can be obtained by any capture device that captures space diverse views of the parking lot and the objects therein. Furthermore, the present invention may employ a single stationary capture device in conjunction with, but not limited to, for example, a spatially offset direction-controllable illuminator to obtain the stereoscopic triangulation effect. It is further understood that a polar-sensing device (sensing distance and direction) for deriving a three-dimensional representation of the objects in the parking lot, including a direction-controlled range-finder or three-dimensional imaging sensor (such as, for example, manufactured by Canesta Inc.), may be used without departing from the spirit and/or scope of the present invention.
[0021] In the disclosed embodiment, the cameras 100a and 100b comprise a charge-coupled device (CCD) sensor or a CMOS sensor. Such sensors are well known to those skilled in the art, and thus, a discussion of their construction is omitted herein. In the disclosed embodiments, the sensor comprises, for example, a two-dimensional scanning line sensor or matrix sensor. However, it is understood that other types of sensors may be employed without departing from the scope and/or spirit of the instant invention. In addition, it is understood that the present invention is not limited to the particular camera construction or type described herein. For example, a digital still camera, a video camera, a camcorder, or any other electrical, optical, or acoustical device that records (collects) information (data) for subsequent three-dimensional processing may be used. In addition, a single sensor may be used when an optical element is applied to provide space diversity (for example, a periscope) on a common CCD sensor and where each of the two images is captured by a respective half of the CCD sensor to provide the data for stereoscopic processing.

[0022] Further, it is understood that the image (or images) captured by the camera (or cameras) can be processed substantially "in real time" (e.g., at the time of capturing the image(s)), or stored in, for example, a memory, for delayed processing, without departing from the spirit and/or scope of the invention.

[0023] A location of the cameras 100a and 100b relative to the vehicle 4, and in particular, a distance (representing a spatial diversity) between the cameras 100a and 100b, determines the effectiveness of a stereoscopic analysis of the object 4 and the parking lot 5. For purposes of illustration, dotted lines in Fig. 1 depict the optical viewing angle of each camera. Since the cameras 100a and 100b provide for the capturing of a stereoscopic image, two distinct images fall upon the cameras' sensors.

[0024] Each image captured by the cameras 100a and 100b and their respective sensors is converted to electrical signals having a format that can be utilized by an appropriate image processing device (e.g., a computer 25 shown in Fig. 2, that executes an appropriate image processing routine), so as to, for example, process the captured image, analyze data associated with the captured image, and produce a report related to the analysis.

[0025] As seen in Fig. 2, a selector switch 40 enables selection of two cameras from among a plurality of cameras that are dispersed over the parking lot 5 to provide complementary images suitable for stereoscopic analysis. In the disclosed embodiment, the two obtained images are transformed by an external frame capture device 42. Alternately, an internal frame capture device 26 (Fig. 3) within the image processor (e.g., computer) 25 may be used. The frame capture device (grabber) converts the images to a format recognizable by the computer 25 and its processor 29 (Fig. 3). However, it is understood that a digital or analog bus for collecting image data from a selected pair of cameras, instead of the selector switch or other image data conveyances, can be used without departing from the spirit and/or scope of the invention.
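A non-authoritative sketch of this capture path, assuming commodity cameras exposed through OpenCV (the device indices and error handling are illustrative; a real installation would drive the selector switch 40 and frame capture device 42 instead):

```python
import cv2

def grab_stereo_pair(left_idx: int, right_idx: int):
    """Select two cameras from a larger set and grab one frame from each,
    returning the pair ready for stereoscopic processing."""
    caps = [cv2.VideoCapture(i) for i in (left_idx, right_idx)]
    try:
        frames = []
        for cap in caps:
            ok, frame = cap.read()       # frame arrives already digitized,
            if not ok:                   # ready for the processing device
                raise RuntimeError("capture failed")
            frames.append(frame)
        return frames
    finally:
        for cap in caps:
            cap.release()

# left, right = grab_stereo_pair(0, 1)  # pick the pair covering a segment
```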
[0026] Fig. 3 illustrates in greater detail the computer 25, including internal and external accessories, such as, but not limited to, a frame capture device 26, a camera controller 26a, a storage device 28, a memory (e.g., RAM) 27, a display controller 30, a switch controller 31 (for controlling the selector switch 40), at least one monitor 32, a keyboard 34, and a mouse 36. However, it is understood that multiple computers and/or different computer architectures can be used without departing from the spirit and/or scope of the invention.
[0027] The computer 25 employed with the present invention comprises, for example, a personal computer based on an Intel microprocessor 29, such as, for example, a Pentium III microprocessor (or compatible processor, such as, for example, an Athlon processor manufactured by AMD), and utilizes the Windows operating system produced by Microsoft Corporation. The construction of such computers is well known to those skilled in the art, and hence, a detailed description is omitted herein. However, it is understood that computers utilizing alternative processors and operating systems, such as, but not limited to, for example, an Apple Computer or a Sun computer, may be used without departing from the scope and/or spirit of the invention. It is understood that the operations depicted in Fig. 4 function to derive a three-dimensional model of the object of interest and its surroundings. Extrapolation of the captured image provides an estimate of the three-dimensional location of the object 4 relative to the surface of the parking lot 5.
[0028] It is noted that all of the functions of the computer 25 may be integrated into a single circuit board, or the computer may comprise a plurality of daughter boards that interface to a motherboard. While the present invention discloses the use of a conventional personal computer that is "customized" to perform the tasks of the present invention, it is understood that alternative processing devices, such as, for example, a programmed logic array designed to perform the functions of the present invention, may be substituted without departing from the spirit and/or scope of the invention.
[0029] The temporary storage device 27 stores the digital data output from the frame capture device 26. The temporary storage device 27 may be, for example, RAM memory that retains the data stored therein as long as electrical power is supplied to the RAM.

[0030] The long-term storage device 28 comprises, for example, a non-volatile memory and/or a disk drive. The long-term storage device 28 stores operating instructions that are executed by the invention to determine the occupancy status of a parking space. For example, the storage device 28 stores routines (to be described below) for calibrating the system, for performing a perspective correction, and for 3D mapping.

[0031] The display controller 30 comprises, for example, an ASUS model V7100 video card. This card converts the digital computer signals to a format (e.g., RGB, S-Video, and/or composite video) that is compatible with the associated monitor 32. The monitor 32 may be located proximate the computer 25 or may be remotely located from the computer 25.

[0032] Figs. 4(a) to 4(e) illustrate optical image transformations produced by the stereoscopic camera set 100a and 100b of Fig. 1, as well as initial image normalization in the electronic domain. In Fig. 4(a), the object (e.g., the parking lot 5 and its contents 4) is illustrated as a rectangle with an "X" marking its right half. The marking helps in recognizing the orientation of images. Object 4 is in a plane skewed relative to the cameras' focal planes, and faces the cameras of Fig. 1. For convenience, the following discussion of Figs. 4(b) to 4(e) will refer to "right" and "left". However, it is understood that terminology such as "left" and "right" is simply used to differentiate between the plural images produced by the cameras 100a and 100b.

[0033] Fig. 4(b) represents an image 200 of the object 4 as seen through a left camera (100a in Fig. 1), showing a perspective distortion (e.g., trapezoidal distortion) of the image and maintaining the same orientation ("X" marking on the right half, as on the object 4 itself).
[0034] Fig. 4(c) represents an image 202 of the object 4 as seen through a right camera (100b in Fig. 1) showing a perspective distortion (e.g., trapezoidal distortion) and maintaining the original orientation ("X" marking on the right half as on the object 4 itself).
[0035] It is noted that in addition to the perspective distortion, additional distortions (not illustrated) may also occur as a result of, but not limited to, for example, an imperfection in the optical elements and/or an imperfection in the cameras' sensors. The images 200 and 202 must be restored to minimize the distortion effects within the resolution capabilities of the cameras' sensors. The image restoration is done in the electronic and software domains by the computer 25. There are circumstances where the distortions can be tolerated and no special corrections are necessary. This is especially true when the space diversity (the distance between cameras) is small.

[0036] According to the present invention, a database is employed to maintain a record of the distortion shift for each pixel of the sensor of each camera for the best attainable accuracy. It is understood that, in the absence of such a database, the present invention will function with the uncorrected (e.g., inherent) distortions of each camera. In the disclosed embodiment, the database is created at the time of installation of the system, when the system is initially calibrated, and may be updated each time periodic maintenance of the system's cameras is performed. However, it is understood that calibration of the system may be performed at any time without departing from the scope and/or spirit of the invention. The information stored in the database is used to perform a restoration process on the two images, if necessary, as will be described below. This database may be stored, for example, in the computer 25 used with the cameras 100a and 100b.
[0037] Image 204 in Fig. 4(d) represents a restored version of image 200, derived from the left camera's focal plane sensor, which includes a correction for the above-noted perspective distortion. Similarly, image 206 in Fig. 4(e) represents a restored version of image 202, derived from the right camera's focal plane sensor, which includes a correction for the above-noted perspective distortion.
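As a hedged sketch of this restoration step, assuming the calibration database is held as per-pixel shift maps per camera (the dictionary layout and zero-shift example are illustrative assumptions), OpenCV's remap can apply the stored corrections:

```python
import cv2
import numpy as np

def restore_image(img: np.ndarray, shift_db: dict, cam_id: str) -> np.ndarray:
    """Apply the per-pixel distortion shifts recorded at calibration time
    for camera cam_id, producing the restored ('physical') image."""
    h, w = img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = shift_db[cam_id]            # per-pixel shifts from calibration
    return cv2.remap(img, xs + dx, ys + dy, cv2.INTER_LINEAR)

# With zero shifts the image passes through unchanged:
img = np.zeros((480, 640, 3), np.uint8)
db = {"left": (np.zeros((480, 640), np.float32),
               np.zeros((480, 640), np.float32))}
restored = restore_image(img, db, "left")
```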
[0038] Fig. 5 illustrates a stereoscopic process for three-dimensional mapping. Parking lots and parked vehicles generally have irregular, three-dimensional shapes. In order to simplify the following discussion, an explanation is set forth with respect to three points of a concave pyramid (not shown): a tip 220 of the pyramid, a projection 222 of the tip 220 on a base of the pyramid perpendicular to the base, and a corner 224 of the base of the pyramid. The tip 220 points away from the camera (not shown).

[0039] Flat image 204 of Fig. 4(d) and flat image 206 of Fig. 4(e) are shown in Fig. 5 by dotted lines for the object, described earlier, and by solid lines for the stereoscopic images of the three-dimensional object that includes the pyramid. Fig. 5 illustrates the geometrical relationship between the stereoscopic images 204 and 206 of the pyramid and the three-dimensional pyramid defined by the reconstructed tip 220, its projection 222 on the base, and the corner 224 of the base. It is noted that a first image point 226, corresponding to the reconstructed tip of the pyramid 220, is shifted to the left with respect to the projection of the tip 228 on the flat object corresponding to the reconstructed projection point 222 of the reconstructed tip 220. Similarly, a second image point 230 corresponding to the reconstructed tip of the pyramid 220 is shifted to the right with respect to a projection point 232 on the flat object corresponding to the reconstructed projection point 222 of the reconstructed tip 220. The image points 234 and 236 corresponding to the corner 224 of the base of the pyramid are not shifted because the corner is part of the pyramid's base.

[0040] The first reconstructed point 222 of the reconstructed tip 220 on the base is derived as the intersection of lines starting at projected points 228 and 232, each inclined at an angle as viewed by the left camera 100a and the right camera 100b, respectively. In the same manner, the reconstructed tip 220 is determined from points 226 and 230, whereas the corner point 224 is derived from points 234 and 236. Note that reconstructed points 224 and 222 are on a horizontal line that represents a plane of the pyramid base. It is further noted that reconstructed point 220 is above the horizontal line, indicating a location outside the pyramid base plane on a distant side relative to the cameras. The process of mapping the three-dimensional object is performed in accordance with rules implemented by a computer algorithm executed by the computer 25. The three-dimensional analysis of a scene is performed by use of static or dynamic images. A static image is obtained from a single frame of each capture device. A dynamic image is obtained as a difference of successive frames of each capture device and is used when objects of interest are in motion. It is noted that using a dynamic image to perform the three-dimensional analysis results in a reduction of "background clutter" and enhances the delineation of moving objects of interest by, for example, subtracting successive frames, one from another, resulting in the cancellation of all stationary objects captured in the images.
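A minimal sketch of deriving a dynamic image by subtracting successive frames (the noise-floor value is an assumption for illustration):

```python
import numpy as np

def dynamic_image(frame_now: np.ndarray, frame_prev: np.ndarray,
                  noise_floor: int = 12) -> np.ndarray:
    """Difference successive frames: stationary background cancels, moving
    objects remain; the noise floor suppresses sensor noise."""
    diff = np.abs(frame_now.astype(np.int16) - frame_prev.astype(np.int16))
    return (diff > noise_floor).astype(np.uint8) * 255

prev = np.full((480, 640), 100, np.uint8)   # static scene
now = prev.copy()
now[200:260, 300:420] = 180                 # a moving vehicle patch
mask = dynamic_image(now, prev)             # nonzero only where motion occurred
```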
[0041] The present system may be configured to present a visual image of a specific parking lot section being monitored, thus allowing the staff to visually confirm the condition of the parking lot section.
[0042] In the disclosed invention, a parking lot customer parking availability notification occupancy display (not shown) comprises distributed displays positioned throughout the parking lot directing drivers to available parking spaces. It is understood that devices presenting alphanumeric or arrow messages for driver direction, such as, but not limited to, for example, a visual monitor or other optoelectric or electro-mechanical device, may be employed, either alone or in combination, without departing from the spirit and/or scope of the invention.
[0043] The system of the present invention uniquely determines the location of a feature as follows: digital cameras (sometimes in conjunction with frame capture devices) present the image they record to the computer 25 in the form of a rectangular array (raster) of "pixels" (picture elements), such as, for example, 640x480 pixels. That is, the large rectangular image is composed of rows and columns of much smaller pixels, with 640 columns of pixels and 480 rows of pixels. A pixel is designated by a pair of integers, (a,b), that represents a horizontal location "a" and a vertical location "b" in the raster of camera i. Each pixel can be visualized as a tiny light beam emanating from a point at the scene into the sensor (camera) 100a or 100b in a particular direction. The camera does not "know" where along that beam the "feature" which has been identified is located. However, when the same feature has been identified by two spatially diverse cameras, the point where the two "beams" from the two cameras cross precisely locates the feature in the three-dimensional space of the monitored parking lot segment. For example, the calibration process (to be described below) determines which pixel addresses (a,b) lie nearest any three-dimensional point (x,y,z) in the monitored space of the parking lot. Whenever a feature on a vehicle is visible in two (or more) cameras, the three-dimensional location of the feature can be obtained by interpolation in the calibration data.

[0044] The operations performed by the computer 25 on the data obtained by the cameras will now be described. An initial image view C^{ij} captured by a camera is processed to obtain a two-dimensional physical perspective representation. The two-dimensional physical perspective representation of the image is transformed via a general metric transformation:
P^{ij} = \sum_{k=1}^{N_X} \sum_{l=1}^{N_Y} g^{ij}_{kl} C^{kl} + h^{ij}

to the "physical" image P^{ij}. In the disclosed embodiment, i and k are indices that range from 1 to N_X, where N_X is the number of pixels in a row, and j and l are indices that range from 1 to N_Y, where N_Y is the number of pixels in a column. The transformation from the image view C^{ij} to the physical image P^{ij} is a linear transformation governed by g^{ij}_{kl}, which represents both a rotation and a dilation of the image view C^{ij}, and h^{ij}, which represents a displacement of the image view C^{ij}.
[0045] A three-dimensional correlation is performed on all observed features which are uniquely identified in both images. For example, if L^{ij} and R^{ij} are defined as the left and right physical images of the object under study, respectively, then

f(L^{ij}, R^{ij})

is the three-dimensional physical representation of all uniquely-defined points visible in a feature of the object which can be seen in two cameras, whose images are designated by L and R. The transformation function f is derived by using the physical transformations for the L and R cameras and the physical geometry of the stereo pair derived from the locations of the two cameras.
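As a hedged numerical sketch of the "crossing beams" described above, the following locates a feature as the least-squares midpoint between two pixel rays, one per camera (the camera geometry in the example is illustrative, not taken from the patent):

```python
import numpy as np

def triangulate(ray_origin_l, ray_dir_l, ray_origin_r, ray_dir_r):
    """Locate a feature as the point nearest to two pixel 'beams', one per
    camera; a least-squares midpoint handles the fact that real rays
    rarely intersect exactly."""
    d1 = ray_dir_l / np.linalg.norm(ray_dir_l)
    d2 = ray_dir_r / np.linalg.norm(ray_dir_r)
    w0 = ray_origin_l - ray_origin_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom            # parameter along the left ray
    t = (a * e - b * d) / denom            # parameter along the right ray
    p1 = ray_origin_l + s * d1
    p2 = ray_origin_r + t * d2
    return (p1 + p2) / 2.0                 # midpoint of closest approach

# Two cameras 3 m apart, both sighting a point 10 m ahead on the lot:
p = triangulate(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                np.array([3., 0., 0.]), np.array([-0.3, 0., 1.]))
print(p)   # ~ [0, 0, 10]
```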
[0046] A second embodiment of a camera system used with the present invention is illustrated in Fig. 6. A discussion of the elements that are common to those in Fig. 1 is omitted herein; only those elements that are new will be described.

[0047] The second embodiment differs from the first embodiment shown in Fig. 1 by the inclusion of a pattern projector (generator) 136. The pattern projector 136 assists in the stereoscopic object analysis for the three-dimensional mapping of the object. Since the stereoscopic analysis and three-dimensional mapping of the object are based on a shift of each point of the object in the right and left images, it is important to identify each specific object point in both the right and left images. Providing the object with distinct markings, often known as fiducials, provides the best references for analytical comparison of the position of each point in the right and left images, respectively.
[0048] The second embodiment of the present invention employs the pattern generator 136 to project a pattern of light (or shadows). In the second embodiment, the pattern projector 136 is shown illuminating the object (vehicle) 4 and the parking lot segment 5 from a vantage position centered between cameras 100a and 100b. However, it is understood that the pattern generator may be located at different positions without departing from the scope and/or spirit of the invention.
[0049] The pattern generator 136 projects at least one of a stationary and a moving pattern of light onto the parking lot 5, the object (vehicle) 4, and all else that is within the view of the cameras 100a and 100b. The projected pattern is preferably invisible (for example, infrared) light, so long as the cameras can detect the image and/or pattern of light. However, visible light may be used without departing from the scope and/or spirit of the invention. It is noted that the projected pattern is especially useful when the object (vehicle) 4 and/or its surroundings are relatively featureless (e.g., a parking lot covered by snow), making it difficult to construct a three-dimensional representation of the monitored scene. It is further noted that a moving pattern enhances image processing by the application of dynamic three-dimensional analysis.
[0050] Fig. 7 illustrates an example of a grid form pattern 138 projected by the pattern projector 136. It should be appreciated that alternative patterns may be utilized by the present invention without departing from the scope and/or spirit of the invention. For example, the pattern can vary from a plain quadrille grid or a dot pattern to more distinct marks, such as many different small geometrical shapes in an ordered or random pattern.

[0051] In the grid form pattern shown in Fig. 7, dark lines are created on an illuminated background. Alternately, if multiple sequences of camera-captured frames are to be analyzed, a moving point of light, such as, for example, a laser scan pattern, can be utilized. In addition, a momentary illumination of the entire area can provide an overall frame of reference.
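A minimal sketch of rendering such a grid form pattern (the pitch and line-width values are illustrative assumptions):

```python
import numpy as np

def grid_pattern(h: int = 480, w: int = 640, pitch: int = 32, line_px: int = 2):
    """Render a quadrille grid like Fig. 7: dark lines on an illuminated
    background."""
    img = np.full((h, w), 255, np.uint8)       # illuminated background
    for y in range(0, h, pitch):
        img[y:y + line_px, :] = 0              # horizontal dark lines
    for x in range(0, w, pitch):
        img[:, x:x + line_px] = 0              # vertical dark lines
    return img

pattern = grid_pattern()   # feed to a projector, or warp for analysis tests
```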
[0052] Fig. 8(a) illustrates a left image 140, and Fig. 8(b) illustrates a right image 142, of a stereoscopic view of a concave volume produced by the stereoscopic cameras 100a and 100b, along with distortions 144 and 146 of the grid form pattern 138 on the left and right images 140 and 142, respectively. In particular, it is noted that the distortions 144 and 146 represent a gradual horizontal displacement of the grid form pattern to the left in the left image 140, and a gradual horizontal displacement of the grid form pattern to the right in the right image 142.

[0053] A variation of the second embodiment involves using a pattern generator that projects a dynamic (e.g., non-stationary) pattern, such as a raster scan, onto the object (vehicle) 4 and the parking lot 5 and all else that is in the view of the cameras 100a and 100b. The cameras 100a and 100b capture the reflection of the pattern from the parking lot 5 and the object (vehicle) 4, which enables dynamic image analysis as a result of the motion registered by the capture device.
[0054] Another variation of the second embodiment is to use a pattern generator that projects uniquely-identifiable patterns, such as, but not limited to, for example, letters, numbers, or geometric patterns, possibly in combination with a static or dynamic featureless pattern. This prevents the mislabeling of intersections in stereo pairs, that is, incorrectly correlating an intersection in one image of a stereo pair with an intersection in the second image of the pair that is actually displaced by one intersection along one of the grid lines.
[0055] The operations performed by the computer 25 to determine the status of a parking space will now be described.
[0056] Images obtained from cameras 100a and 100b are formatted by the frame capture device 26 to derive parameters that describe the position of the object (vehicle) 4. This data is used to form a database that is stored in either the short-term storage device 27 or the long-term storage device 28 of the computer 25. Optionally, subsequent images are then analyzed in real-time and compared to previous data for changes in order to determine the motion, rate of motion, and/or change of orientation of the vehicle 4. This data is used to characterize the status of the vehicle.
[0057] For example, a database for the derived parameters may be constructed using a commercially available software program called ACCESS, which is sold by Microsoft. If desired, the raw image may also be stored. One skilled in the art will recognize that any fully-featured database may be used for such storage and retrieval, and thus, the construction and/or operation of the present invention is not to be construed as limited to the use of Microsoft ACCESS.
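By way of illustration only, the following uses sqlite3 as a self-contained stand-in for the ACCESS database named above; the schema and field names are assumptions, not part of the disclosure:

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE occupancy (
    space_id TEXT, row TEXT, section TEXT,
    captured_at TEXT, occupied INTEGER,
    params BLOB)""")

def store_observation(space_id, row, section, occupied, params=b""):
    """Record one derived-parameter observation, correlated with the space
    identification and the date/time the image data was produced."""
    conn.execute("INSERT INTO occupancy VALUES (?, ?, ?, ?, ?, ?)",
                 (space_id, row, section,
                  datetime.now().isoformat(), int(occupied), params))

store_observation("A-17", "A", "North", occupied=True)
print(conn.execute("SELECT space_id, occupied FROM occupancy").fetchall())
```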
[0058] Subsequent images are analyzed for changes in position, motion, rate of motion, and/or change of orientation of the object. The tracking of the sequences of motion of the vehicle enables dynamic image analysis and provides a further optional improvement to the algorithm. The comparison of sequential images (that are, for example, only seconds apart) of moving or standing vehicles can help identify conditions in the parking lot that, due to partial obstructions, may not be obvious from a static analysis. Furthermore, depending on the image capture rate, the analysis can capture individuals walking in the parking lot and help monitor their safety, or be used for other security and parking lot management purposes. In addition, by forming a long-term recording of these sequences, incidents on the parking lot can be played back to provide evidence for the parties in the form of a sequence of events of an occurrence.

[0059] For example, when one vehicle drives too close to another vehicle and its door causes a dent in the second vehicle's exterior, or a walking individual is hurt by a vehicle or another individual, such events can be retrieved, step by step, from the recorded data. Thus, the present invention additionally serves as a security device.

[0060] A specific software implementation of the present invention will now be described. However, it is understood that variations to the software implementation may be made without departing from the scope and/or spirit of the invention. While the following discussion is provided with respect to the installation of the present invention in one section of a parking lot, it is understood that the invention is applicable to any size or type of parking facility by duplicating the process in other segments. Further, the size or type of the parking lot monitored by the present invention may be more or less than that described below without departing from the scope and/or spirit of the invention.

[0061] Fig. 9 illustrates the occupancy detection process that is executed by the present invention. Initially, an Executive Process subroutine is called at step S10. Once this subroutine is completed, processing proceeds to step S12 to determine whether a Configuration Process is to be performed. If the determination is affirmative, processing proceeds to step S14, wherein the Configuration subroutine is called. Once the Configuration subroutine is completed, processing continues at step S16. On the other hand, if the determination at step S12 is negative, processing proceeds from step S12 to step S16.
[0062] At step S16, a determination is made as to whether a Calibration operation should be performed. If it is desired to calibrate the system, processing proceeds to step S18, wherein the Calibrate subroutine is called, after which a System Self-test operation (step S20) is called. However, if it is determined that a system calibration is not required, processing proceeds from step S16 to step S20.
[0063] Once the System Self-test subroutine is completed, an Occupancy Algorithm subroutine (step S22) is called, before the process returns to step S10.

[0064] The above processes and routines are continuously performed while the system is monitoring the parking lot.
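A control-flow sketch of the Fig. 9 routine follows (Python; the function and predicate names are assumptions, and each body is a placeholder for the subroutines of Figs. 10-14):

```python
def executive_process(): pass      # Fig. 10: keyboard/mouse/display service
def configure(): pass              # Fig. 11: cameras, switches, segment info
def calibrate(): pass              # Fig. 13: demarcation lines, height data
def system_self_test(): pass      # Fig. 12: sync, switching, capture checks
def occupancy_algorithm(): pass    # Fig. 14: image analysis and 2D/3D models

def occupancy_detection(configuration_requested, calibration_requested,
                        cycles=3):
    for _ in range(cycles):            # runs continuously in a deployment
        executive_process()            # step S10
        if configuration_requested():  # step S12
            configure()                # step S14
        if calibration_requested():    # step S16
            calibrate()                # step S18
        system_self_test()             # step S20
        occupancy_algorithm()          # step S22

occupancy_detection(lambda: False, lambda: False)
```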
[0065] Fig. 10 illustrates the Executive Process subroutine that is called at step S10. Initially, a Keyboard Service process is executed at step S30, which responds to operator input via a keyboard 34 (see Fig. 3) that is attached to the computer 25. Next, a Mouse Service process is executed at step S32, in order to respond to operator input from a mouse 36 (see Fig. 3). At this point, if an occupancy display has been activated, an Occupancy Display Service process is performed (step S34). This process determines whether and when additional occupancy display changes must be executed to insure that they reflect the latest parking lot condition and provide proper guidance to the drivers.

[0066] Step S36 is executed when the second embodiment is used. It is understood that the first embodiment does not utilize light patterns that are projected onto the object. Thus, when this subroutine is used with the first embodiment, step S36 is deleted or bypassed (not executed). In this step, the projector 136 (Fig. 6) is controlled to generate patterns of light to provide artificial features on the object when the visible features are not sufficient to determine the condition of the object.

[0067] When this subroutine is complete, processing returns to the Occupancy Detection Process of Fig. 9.

[0068] Fig. 11 illustrates the Configure subroutine that is called at step S14. This subroutine comprises a series of operations, some of which are performed automatically and some of which require operator input. At step S40, the capture devices (such as one or more cameras) are identified, along with their coordinates (locations). It is also noted that some cameras may be designed to automatically identify themselves, while other cameras may require identification by the operator. It is noted that this operation to update system information is required only when a camera (or its wiring) is changed.

[0069] Step S42 is executed to identify what video switches and capture boards are installed in the computer 25, to control the cameras (via the camera controller 26a shown in Fig. 3), and to convert their video to computer-usable digital form. It is noted that some cameras generate data in a digital form already compatible with computer formats and do not require such conversion. Thereafter, step S44 is executed to inform the system of which segment of the parking lot is to be monitored. Occupancy Display system parameters (step S46) to be associated with the selected parking lot segment are then set. Then, step S48 is executed to input information about the segment of the parking lot to be monitored. Processing then returns to the main routine in Fig. 9.

[0070] Fig. 12 illustrates the operations that are performed when the System Self-test subroutine (step S20) is called. This subroutine begins with a Camera Synchronization operation (step S50), in which the cameras are individually tested, and then re-tested in concert, to insure that they can capture video images of the monitored volume(s) with sufficient simultaneity that stereo pairs of images will yield accurate information about the monitored parking lot segment. Next, a Video Switching operation is performed (step S52) to verify that the camera video can be transferred to the computer 25. An Image Capture operation is also performed (step S54) to verify that the images of the monitored volume, as received from the cameras, are of sufficient quality to perform the tasks required of the system.
The operation of the computer 25 is then verified (step S56), after which processing returns to the routine shown in Fig. 9.

[0071] The Calibrate subroutine called at step S18 is illustrated in Fig. 13. In the disclosed embodiments, the calibration operation is performed when the monitored parking lot segment is empty of vehicles. When a calibration is requested by the operator and verified in step S60, the system captures the lines which delineate the parking spaces in the predetermined area of the monitored parking lot as part of deriving the parking lot parameters. Each segment of the demarcation lines between parking spaces is determined and three-dimensionally defined (step S62) and stored as part of a baseline in the database (step S64). It is noted that three-dimensional modeling of a few selected points on the demarcation lines between parking spaces can define the entire demarcation line cluster.

[0072] Height calibration is performed when the initial installation is completed. When height calibration is requested by the computer operator and verified in step S66, the calibration is performed by collecting height data (step S68) of an individual of known height. The individual walks on a selected path within the monitored parking lot segment while wearing distinctive clothing that contrasts well with the parking lot's surface (e.g., a white hard-hat if the parking lot surface is black asphalt). The height analysis can be performed on dynamic images since the individual target is in motion (dynamic analysis is often considered more reliable than static analysis). In this regard, the results of the static and dynamic analyses may be superimposed (or otherwise combined, if desired). The height data is stored in the database as another part of a baseline for reference (step S70). The height calibration is set either to a predetermined duration (e.g., two minutes) or by verbal coordination by the computer operator, who instructs the individual providing the height data to walk through the designated locations on the parking lot until the height calibration is completed.

[0073] The calibration data is collected to the nearest pixel of each camera sensor. The camera resolution will therefore have an impact on the accuracy of the calibration data as well as the occupancy detection process.
[0074] The operator is notified (step S72) that the calibration process is completed, and the calibration data is used to update the system calibration tables. The Calibration subroutine is thus completed, and processing returns to the main program shown in Fig. 9.

[0075] Fig. 14 illustrates the Occupancy Algorithm subroutine that is called at step S22. Initially, an Image Analysis subroutine (to be described below) is called at step S80. Image preprocessing methods common in the field of image processing, such as, but not limited to, for example, outlier detection and time-domain integration, are performed to reduce the effects of camera noise, artifacts, and environmental effects (e.g., glare) on subsequent processing. Edge-enhancing processes common in the field of image processing, such as, but not limited to, a Canny edge detector, a Sobel detector, or a Marr-Hildreth edge operator, are performed to provide clear delineation between objects in the captured images. For clear delineation of moving objects, dynamic image analysis is utilized. Image analysis data is processed as dynamic analysis when, for example, a vehicle is stationary but wind-driven tree branches cast a moving shadow on the vehicle's surface. Since the moving shadows reflected from the vehicle's surface are registered by the capture device as moving objects, they are suitable for dynamic analysis. Briefly, the Image Analysis subroutine creates a list for each camera, in which the list contains data of objects and feature(s) on the monitored parking lot segment. Once the lists are created, processing resumes at step S84, where common elements (features) seen by two cameras are determined. For each camera that sees each list element, a determination is made as to whether only one camera sees the feature or whether two cameras see the feature. If only one camera sees the feature, a two-dimensional model is constructed (step S86). The two-dimensional model estimates where the feature would be on the parking lot surface, and where it would be if the vehicle were parked at a given parking space.
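As a condensed, non-authoritative sketch of steps S84-S94 described in this and the following paragraphs (the feature encoding and both threshold values are illustrative assumptions):

```python
def space_occupied(features, min_height_m=0.5, min_count=10):
    """features: list of (n_cameras_seeing, height_above_surface_m).
    Features seen by two or more cameras have a 3D location; once enough
    of them sit above the occupancy threshold height, the space is
    deemed occupied and no further feature analysis is required."""
    count_3d_above = sum(1 for n_cams, h in features
                         if n_cams >= 2 and h > min_height_m)
    return count_3d_above >= min_count

roof_line = [(2, 1.4)] * 25 + [(1, 0.0)] * 5  # mostly stereo-confirmed points
print(space_occupied(roof_line))              # True -> mark space occupied
```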
[0076] However, if more than one camera sees the feature, the three-dimensional location of the feature is determined at step S88. Correlation between common features in images of more than one camera can be performed directly or by a transform function (such as the Fast Fourier Transform) of the feature being correlated. Other transform functions may be employed for enhanced common feature correlation without departing from the scope and/or spirit of the instant invention. It is noted that steps S84, S86, and S88 are repeated for each camera that sees the list element. It is also noted that once a predetermined number of three-dimensional correlated features of two camera images are determined to be above a predetermined occupancy threshold of a given parking space, that parking space is deemed to be occupied and no further feature analysis is required.

[0077] Both the two-dimensional model and the three-dimensional model assemble the best estimate of where the vehicle is relative to the parking area surface, and where any unknown objects are relative to the parking area surface (step S90), at each parking space. Then, at step S92, the objects for which a three-dimensional model is available are tested. If the model places the object close enough to the parking lot surface to be below a predetermined occupancy threshold, an available flag is set (step S94) to set the occupancy displays.

[0078] Fig. 15 illustrates the Image Analysis subroutine that is called at step S80. As previously noted, this subroutine creates a list for each camera, in which the list contains data of objects and feature(s) on the monitored parking lot segment. Specifically, step S120 is executed to obtain camera images in real-time (or near real-time). Three-dimensional models of the monitored object are maintained in the temporary storage device (e.g., RAM) 27 of the computer 25. Then, an operation to identify the object is initiated (step S122). In the disclosed embodiments, this is accomplished by noting features on the object 4 and determining whether they are found and are different from the referenced empty parking lot segment (as stored in the database). If they are found, the three-dimensional model is updated. However, if only one camera presently sees the object, a two-dimensional model is constructed. Note that the two-dimensional model will rarely be utilized if the camera placement ensures that each feature is observed by more than one camera.

[0079] According to the above discussion, the indicating device provides an indication of the availability of at least one available parking space (that is, an indication of empty parking spaces is provided). However, it is understood that the present invention may alternatively provide an indication of which parking space(s) are occupied. Still further, the present invention may provide an indication of which parking space(s) is (are) available for parking and which parking space(s) is (are) unavailable for parking.

[0080] The present invention may be utilized for parking lot management functions. These functions include, but are not limited to, for example, ensuring the proper utilization of handicapped parking spaces, the scheduling of shuttle transportation, and determining the speed at which vehicles travel in the parking lot. The availability of handicapped spaces may be periodically adjusted according to statistical evidence of their usage, as derived from the occupancy data (status).
Shuttle transportation may be effectively scheduled based on the number of passengers recorded by the three-dimensional model (in near real-time) at a shuttle stop. The scheduling may be determined based, for example, on the amount of time individuals wait at a shuttle stop. Vehicle speed can be determined, for example, by a dynamic image analysis of a traveled area of the parking lot. Dynamic image analysis determines the velocity of movement at each monitored location.

[0081] The foregoing discussion has been provided merely for the purpose of explanation and is in no way to be construed as limiting of the present invention. While the present invention has been described with reference to exemplary embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Changes may be made, within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the present invention in its aspects. Although the present invention has been described herein with reference to particular means, materials and embodiments, the present invention is not intended to be limited to the particulars disclosed herein; rather, the present invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. The invention described herein comprises dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices constructed to implement the invention described herein. However, it is understood that alternative software implementations including, but not limited to, distributed processing, distributed switching, component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the invention described herein.

Claims

We claim:
1. A method for analyzing a status of at least one predetermined area of a facility, comprising: establishing a baseline by performing an identification procedure on the facility at a predetermined time; capturing an image of at least one predetermined area of the facility; producing a three-dimensional model by processing the captured image; and indicating the status of the at least one predetermined area based upon a comparison of the three-dimensional model to the baseline.
2. The method of claim 1, wherein producing a three-dimensional model further comprises processing the captured image using at least one of a static image process and a dynamic image process.
3. The method of claim 1, wherein indicating the status comprises updating a status display.
4. The method of claim 1, wherein capturing a synchronized image comprises capturing an image with a plurality of sensors.
5. The method of claim 1, wherein capturing a synchronized image comprises capturing an image with a sensor in conjunction with a controllable directional illuminator.
6. The method of claim 1, wherein capturing an image comprises capturing an image with at least one of a direction controlled range-finder and a three-dimensional sensor.
7. The method of claim 1, further comprising using a pattern generator to project a distinctive marking into at least one predetermined area.
8. The method of claim 1, wherein processing a captured image comprises producing a determination of at least one of a proximity and an orientation of objects in the at least one predetermined area.
9. The method of claim 1, further comprising at least one of recording the captured image and playing back the captured image.
10. An apparatus for monitoring a presence of an object in a predetermined space in a parking lot, comprising: an image capture device that captures an image representing a predetermined space in a parking lot; a processor that processes said captured image to produce a three-dimensional model of said captured image, said processor analyzing said three-dimensional model to determine an occupancy condition corresponding to at least one of an empty parking space and an occupied parking space; and a notification device that provides a notification in accordance with said determined occupancy condition.
11. The apparatus of claim 10, wherein said captured image is processed as at least one of a static image and a dynamic image.
12. The apparatus of claim 10, further comprising a reporting device that provides at least one of a numerical report and a graphical report of a status of said predetermined space in the parking lot.
13. The apparatus of claim 10, wherein said image capture device comprises a plurality of sensors.
14. The apparatus of claim 10, wherein said image capture device comprises a sensor in conjunction with a directional illuminator.
15. The apparatus of claim 10, wherein said image capture device comprises at least one of a directional range-finder sensor and a three-dimensional sensor.
16. The apparatus of claim 10, further comprising a visual display device that provides at least one of a visual representation of the predetermined space and said notification of said occupancy condition.
17. The apparatus of claim 10, wherein said processor determines at least one of a proximity and an orientation of objects within said predetermined space.
18. The apparatus of claim 10, further comprising a recorder that at least one of records said captured image and plays back said captured image.
19. A method for monitoring a predetermined space in a parking lot, comprising: capturing an image of a predetermined space of a parking lot; processing the captured image to produce a three-dimensional model of the captured image; analyzing the three-dimensional model to determine an occupancy status of the predetermined space; and providing a notification when said occupancy status indicates an existence of an unoccupied parking space.
20. The method of claim 19, further comprising providing at least one of a numerical report and a graphical report of a status of the predetermined space in the parking lot.
21. The method of claim 19, wherein capturing an image comprises capturing an image with a sensor in conjunction with a controllable directional illuminator.
22. The method of claim 19, wherein capturing an image comprises capturing an image with at least one of a directional range-finder sensor and a three-dimensional sensor.
23. The method of claim 21, wherein capturing an image comprises using a plurality of sensors to capture an image of the predetermined space.
24. The method of claim 21, further comprising utilizing the three-dimensional model to perform a parking lot management operation.
PCT/US2002/029826 2001-10-03 2002-10-01 Apparatus and method for sensing the occupancy status of parking spaces in a parking lot WO2003029046A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/490,115 US7116246B2 (en) 2001-10-03 2002-10-10 Apparatus and method for sensing the occupancy status of parking spaces in a parking lot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US32644401P 2001-10-03 2001-10-03
US60/326,444 2001-10-03

Publications (1)

Publication Number Publication Date
WO2003029046A1 true WO2003029046A1 (en) 2003-04-10

Family

ID=23272233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/029826 WO2003029046A1 (en) 2001-10-03 2002-10-01 Apparatus and method for sensing the occupancy status of parking spaces in a parking lot

Country Status (2)

Country Link
US (1) US7116246B2 (en)
WO (1) WO2003029046A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112370A (en) * 2014-07-30 2014-10-22 哈尔滨工业大学深圳研究生院 Monitoring image based intelligent parking lot parking place identification method and system
CN107424433A (en) * 2017-06-09 2017-12-01 成都智建新业建筑设计咨询有限公司 Intelligent underground parking lot parking position monitoring system based on BIM technology
CN107424432A (en) * 2017-06-09 2017-12-01 成都智建新业建筑设计咨询有限公司 The method monitored in real time to parking position based on BIM technology
WO2018050937A1 (en) * 2016-09-13 2018-03-22 Patino Alonso Nicolas Stereoscopic locating device for locating free parking spaces for motor vehicles
FR3057827A1 (en) * 2016-10-26 2018-04-27 Valeo Schalter Und Sensoren Gmbh OBSTACLE DETECTION SYSTEM ON A TRAFFIC CHAUSSEE
WO2019083661A1 (en) * 2017-10-24 2019-05-02 Dish Network L.L.C. Wide area parking spot identification
US10506309B2 (en) 2015-10-05 2019-12-10 Parkifi, Inc. Parking data collection systems and methods
CN111292353A (en) * 2020-01-21 2020-06-16 成都恒创新星科技有限公司 Parking state change identification method
US10847028B2 (en) 2018-08-01 2020-11-24 Parkifi, Inc. Parking sensor magnetometer calibration
US10991249B2 (en) 2018-11-30 2021-04-27 Parkifi, Inc. Radar-augmentation of parking space sensors

Families Citing this family (175)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877897A (en) 1993-02-26 1999-03-02 Donnelly Corporation Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array
US6822563B2 (en) 1997-09-22 2004-11-23 Donnelly Corporation Vehicle imaging system with accessory control
US6891563B2 (en) 1996-05-22 2005-05-10 Donnelly Corporation Vehicular vision system
US7655894B2 (en) 1996-03-25 2010-02-02 Donnelly Corporation Vehicular image sensing system
US7176440B2 (en) * 2001-01-19 2007-02-13 Honeywell International Inc. Method and apparatus for detecting objects using structured light patterns
US7697027B2 (en) 2001-07-31 2010-04-13 Donnelly Corporation Vehicular video system
US6882287B2 (en) 2001-07-31 2005-04-19 Donnelly Corporation Automotive lane change aid
ES2391556T3 (en) 2002-05-03 2012-11-27 Donnelly Corporation Object detection system for vehicles
DE10257722A1 (en) * 2002-12-11 2004-07-01 Robert Bosch Gmbh parking aid
JP3700707B2 (en) * 2003-03-13 2005-09-28 コニカミノルタホールディングス株式会社 Measuring system
US7308341B2 (en) 2003-10-14 2007-12-11 Donnelly Corporation Vehicle communication system
US7526103B2 (en) 2004-04-15 2009-04-28 Donnelly Corporation Imaging system for vehicle
JP4989464B2 (en) * 2004-05-12 2012-08-01 レイセオン カンパニー Event warning system and method
DE102004028763A1 (en) * 2004-06-16 2006-01-19 Daimlerchrysler Ag Andockassistent
US7881496B2 (en) * 2004-09-30 2011-02-01 Donnelly Corporation Vision system for vehicle
US7620209B2 (en) * 2004-10-14 2009-11-17 Stevick Glen R Method and apparatus for dynamic space-time imaging system
JP4604703B2 (en) * 2004-12-21 2011-01-05 アイシン精機株式会社 Parking assistance device
US7720580B2 (en) 2004-12-23 2010-05-18 Donnelly Corporation Object detection system for vehicle
US7355527B2 (en) * 2005-01-10 2008-04-08 William Franklin System and method for parking infraction detection
TWI275308B (en) * 2005-08-15 2007-03-01 Compal Electronics Inc Method and apparatus for adjusting output images
US7834778B2 (en) 2005-08-19 2010-11-16 Gm Global Technology Operations, Inc. Parking space locator
US20070085067A1 (en) * 2005-10-18 2007-04-19 Lewis John R Gated parking corral
US8242476B2 (en) * 2005-12-19 2012-08-14 Leddartech Inc. LED object detection system and method combining complete reflection traces from individual narrow field-of-view channels
WO2008154736A1 (en) 2007-06-18 2008-12-24 Leddartech Inc. Lighting system with driver assistance capabilities
US7538690B1 (en) * 2006-01-27 2009-05-26 Navteq North America, Llc Method of collecting parking availability information for a geographic database for use with a navigation system
US7516010B1 (en) 2006-01-27 2009-04-07 Navteg North America, Llc Method of operating a navigation system to provide parking availability information
US20070294147A1 (en) * 2006-06-09 2007-12-20 International Business Machines Corporation Time Monitoring System
WO2008024639A2 (en) 2006-08-11 2008-02-28 Donnelly Corporation Automatic headlamp control system
US7720260B2 (en) * 2006-09-13 2010-05-18 Ford Motor Company Object detection system and method
US20080177571A1 (en) * 2006-10-16 2008-07-24 Rooney James H System and method for public health surveillance and response
US8139115B2 (en) * 2006-10-30 2012-03-20 International Business Machines Corporation Method and apparatus for managing parking lots
US20080112610A1 (en) * 2006-11-14 2008-05-15 S2, Inc. System and method for 3d model generation
US8013780B2 (en) 2007-01-25 2011-09-06 Magna Electronics Inc. Radar sensing system for vehicle
JP5146446B2 (en) * 2007-03-22 2013-02-20 日本電気株式会社 MOBILE BODY DETECTION DEVICE, MOBILE BODY DETECTING PROGRAM, AND MOBILE BODY DETECTING METHOD
CA2691141C (en) 2007-06-18 2013-11-26 Leddartech Inc. Lighting system with traffic management capabilities
US7914187B2 (en) 2007-07-12 2011-03-29 Magna Electronics Inc. Automatic lighting system with adaptive alignment function
US8017898B2 (en) 2007-08-17 2011-09-13 Magna Electronics Inc. Vehicular imaging system in an automatic headlamp control system
US8451107B2 (en) 2007-09-11 2013-05-28 Magna Electronics, Inc. Imaging system for vehicle
WO2009046268A1 (en) 2007-10-04 2009-04-09 Magna Electronics Combined rgb and ir imaging sensor
US8723689B2 (en) 2007-12-21 2014-05-13 Leddartech Inc. Parking management system and method using lighting system
CA2857826C (en) 2007-12-21 2015-03-17 Leddartech Inc. Detection and ranging methods and systems
US20090179776A1 (en) * 2008-01-15 2009-07-16 Johnny Holden Determination of parking space availability systems and methods
US8259163B2 (en) * 2008-03-07 2012-09-04 Intellectual Ventures Holding 67 Llc Display with built in 3D sensing
US8489353B2 (en) * 2009-01-13 2013-07-16 GM Global Technology Operations LLC Methods and systems for calibrating vehicle vision systems
EP2401176B1 (en) 2009-02-27 2019-05-08 Magna Electronics Alert system for vehicle
US8376595B2 (en) 2009-05-15 2013-02-19 Magna Electronics, Inc. Automatic headlamp control
US9479768B2 (en) * 2009-06-09 2016-10-25 Bartholomew Garibaldi Yukich Systems and methods for creating three-dimensional image media
WO2011014497A1 (en) 2009-07-27 2011-02-03 Magna Electronics Inc. Vehicular camera with on-board microcontroller
US8874317B2 (en) 2009-07-27 2014-10-28 Magna Electronics Inc. Parking assist system
EP2473871B1 (en) 2009-09-01 2015-03-11 Magna Mirrors Of America, Inc. Imaging and display system for vehicle
EP2306427A1 (en) * 2009-10-01 2011-04-06 Kapsch TrafficCom AG Device and method for determining the direction, speed and/or distance of vehicles
SI2306429T1 (en) * 2009-10-01 2012-07-31 Kapsch Trafficcom Ag Device and method for determining the direction, speed and/or distance of vehicles
EP2517189B1 (en) 2009-12-22 2014-03-19 Leddartech Inc. Active 3d monitoring system for traffic detection
JP5763297B2 (en) * 2010-01-25 2015-08-12 京セラ株式会社 Portable electronic devices
US8890955B2 (en) 2010-02-10 2014-11-18 Magna Mirrors Of America, Inc. Adaptable wireless vehicle vision system based on wireless communication error
US8306734B2 (en) * 2010-03-12 2012-11-06 Telenav, Inc. Navigation system with parking space locator mechanism and method of operation thereof
US9117123B2 (en) 2010-07-05 2015-08-25 Magna Electronics Inc. Vehicular rear view camera display system with lifecheck function
KR101607419B1 (en) * 2010-08-27 2016-03-29 인텔 코포레이션 Remote control device
US8766818B2 (en) 2010-11-09 2014-07-01 International Business Machines Corporation Smart spacing allocation
DE112011103834T8 (en) 2010-11-19 2013-09-12 Magna Electronics, Inc. Lane departure warning and lane centering
WO2012075250A1 (en) 2010-12-01 2012-06-07 Magna Electronics Inc. System and method of establishing a multi-camera image using pixel remapping
US9264672B2 (en) 2010-12-22 2016-02-16 Magna Mirrors Of America, Inc. Vision display system for vehicle
US9085261B2 (en) 2011-01-26 2015-07-21 Magna Electronics Inc. Rear vision system with trailer angle detection
ES2425778T3 (en) * 2011-03-17 2013-10-17 Kapsch Trafficcom Ag Parking with reservation system
US9194943B2 (en) 2011-04-12 2015-11-24 Magna Electronics Inc. Step filter for estimating distance in a time-of-flight ranging system
WO2012145818A1 (en) 2011-04-25 2012-11-01 Magna International Inc. Method and system for dynamically calibrating vehicular cameras
US9834153B2 (en) 2011-04-25 2017-12-05 Magna Electronics Inc. Method and system for dynamically calibrating vehicular cameras
WO2012145819A1 (en) 2011-04-25 2012-11-01 Magna International Inc. Image processing method for detecting objects using relative motion
US8908159B2 (en) 2011-05-11 2014-12-09 Leddartech Inc. Multiple-field-of-view scannerless optical rangefinder in high ambient background light
US8831287B2 (en) * 2011-06-09 2014-09-09 Utah State University Systems and methods for sensing occupancy
WO2012172526A1 (en) 2011-06-17 2012-12-20 Leddartech Inc. System and method for traffic side detection and characterization
US10793067B2 (en) 2011-07-26 2020-10-06 Magna Electronics Inc. Imaging system for vehicle
WO2013019707A1 (en) 2011-08-01 2013-02-07 Magna Electronics Inc. Vehicle camera alignment system
US20140218535A1 (en) 2011-09-21 2014-08-07 Magna Electronics Inc. Vehicle vision system using image data transmission and power supply via a coaxial cable
US9681062B2 (en) 2011-09-26 2017-06-13 Magna Electronics Inc. Vehicle camera image quality improvement in poor visibility conditions by contrast amplification
KR101841750B1 (en) * 2011-10-11 2018-03-26 한국전자통신연구원 Apparatus and Method for correcting 3D contents by using matching information among images
US9146898B2 (en) 2011-10-27 2015-09-29 Magna Electronics Inc. Driver assist system with algorithm switching
US9491451B2 (en) 2011-11-15 2016-11-08 Magna Electronics Inc. Calibration system and method for vehicular surround vision system
US10071687B2 (en) 2011-11-28 2018-09-11 Magna Electronics Inc. Vision system for vehicle
WO2013086249A2 (en) 2011-12-09 2013-06-13 Magna Electronics, Inc. Vehicle vision system with customized display
KR20130066829 (en) * 2011-12-13 2013-06-21 한국전자통신연구원 Parking lot management system based on cooperation of intelligent cameras
WO2013126715A2 (en) 2012-02-22 2013-08-29 Magna Electronics, Inc. Vehicle camera system with image manipulation
US10457209B2 (en) 2012-02-22 2019-10-29 Magna Electronics Inc. Vehicle vision system with multi-paned view
US8694224B2 (en) 2012-03-01 2014-04-08 Magna Electronics Inc. Vehicle yaw rate correction
WO2013128427A1 (en) 2012-03-02 2013-09-06 Leddartech Inc. System and method for multipurpose traffic detection and characterization
US10609335B2 (en) 2012-03-23 2020-03-31 Magna Electronics Inc. Vehicle vision system with accelerated object confirmation
US9319637B2 (en) 2012-03-27 2016-04-19 Magna Electronics Inc. Vehicle vision system with lens pollution detection
US9129524B2 (en) 2012-03-29 2015-09-08 Xerox Corporation Method of determining parking lot occupancy from digital camera images
US9070093B2 (en) 2012-04-03 2015-06-30 Xerox Corporation System and method for generating an occupancy model
WO2013158592A2 (en) 2012-04-16 2013-10-24 Magna Electronics, Inc. Vehicle vision system with reduced image color data processing by use of dithering
US10089537B2 (en) 2012-05-18 2018-10-02 Magna Electronics Inc. Vehicle vision system with front and rear camera integration
US9171382B2 (en) 2012-08-06 2015-10-27 Cloudparc, Inc. Tracking speeding violations and controlling use of parking spaces using cameras
US8698895B2 (en) 2012-08-06 2014-04-15 Cloudparc, Inc. Controlling use of parking spaces using multiple cameras
US9489839B2 (en) 2012-08-06 2016-11-08 Cloudparc, Inc. Tracking a vehicle using an unmanned aerial vehicle
US9340227B2 (en) 2012-08-14 2016-05-17 Magna Electronics Inc. Vehicle lane keep assist system
DE102013217430A1 (en) 2012-09-04 2014-03-06 Magna Electronics, Inc. Driver assistance system for a motor vehicle
US9446713B2 (en) 2012-09-26 2016-09-20 Magna Electronics Inc. Trailer angle detection system
US9558409B2 (en) 2012-09-26 2017-01-31 Magna Electronics Inc. Vehicle vision system with trailer angle detection
US9723272B2 (en) 2012-10-05 2017-08-01 Magna Electronics Inc. Multi-camera image stitching calibration system
US9707896B2 (en) 2012-10-15 2017-07-18 Magna Electronics Inc. Vehicle camera lens dirt protection via air flow
US9090234B2 (en) 2012-11-19 2015-07-28 Magna Electronics Inc. Braking control system for vehicle
US9743002B2 (en) 2012-11-19 2017-08-22 Magna Electronics Inc. Vehicle vision system with enhanced display functions
US10025994B2 (en) 2012-12-04 2018-07-17 Magna Electronics Inc. Vehicle vision system utilizing corner detection
JP2014110028A (en) * 2012-12-04 2014-06-12 Sony Corp Image processing apparatus, image processing method, and program
US9481301B2 (en) 2012-12-05 2016-11-01 Magna Electronics Inc. Vehicle vision system utilizing camera synchronization
US9091628B2 (en) 2012-12-21 2015-07-28 L-3 Communications Security And Detection Systems, Inc. 3D mapping with two orthogonal imaging views
US20140218529A1 (en) 2013-02-04 2014-08-07 Magna Electronics Inc. Vehicle data recording system
US9092986B2 (en) 2013-02-04 2015-07-28 Magna Electronics Inc. Vehicular vision system
US9445057B2 (en) 2013-02-20 2016-09-13 Magna Electronics Inc. Vehicle vision system with dirt detection
US10179543B2 (en) 2013-02-27 2019-01-15 Magna Electronics Inc. Multi-camera dynamic top view vision system
US9688200B2 (en) 2013-03-04 2017-06-27 Magna Electronics Inc. Calibration system and method for multi-camera vision system
US10027930B2 (en) 2013-03-29 2018-07-17 Magna Electronics Inc. Spectral filtering for vehicular driver assistance systems
US9327693B2 (en) 2013-04-10 2016-05-03 Magna Electronics Inc. Rear collision avoidance system for vehicle
US10232797B2 (en) 2013-04-29 2019-03-19 Magna Electronics Inc. Rear vision system for vehicle with dual purpose signal lines
US9508014B2 (en) 2013-05-06 2016-11-29 Magna Electronics Inc. Vehicular multi-camera vision system
EP2801958B1 (en) * 2013-05-08 2016-09-14 Axis AB Monitoring method and camera
US9563951B2 (en) 2013-05-21 2017-02-07 Magna Electronics Inc. Vehicle vision system with targetless camera calibration
US9205776B2 (en) 2013-05-21 2015-12-08 Magna Electronics Inc. Vehicle vision system using kinematic model of vehicle motion
US9262921B2 (en) 2013-05-21 2016-02-16 Xerox Corporation Route computation for navigation system using data exchanged with ticket vending machines
US10567705B2 (en) 2013-06-10 2020-02-18 Magna Electronics Inc. Coaxial cable with bidirectional data transmission
US9260095B2 (en) 2013-06-19 2016-02-16 Magna Electronics Inc. Vehicle vision system with collision mitigation
US20140375476A1 (en) 2013-06-24 2014-12-25 Magna Electronics Inc. Vehicle alert system
US10326969B2 (en) 2013-08-12 2019-06-18 Magna Electronics Inc. Vehicle vision system with reduction of temporal noise in images
US9619716B2 (en) 2013-08-12 2017-04-11 Magna Electronics Inc. Vehicle vision system with image classification
US9323993B2 (en) 2013-09-05 2016-04-26 Xerox Corporation On-street parking management methods and systems for identifying a vehicle via a camera and mobile communications devices
US20150086071A1 (en) * 2013-09-20 2015-03-26 Xerox Corporation Methods and systems for efficiently monitoring parking occupancy
US9953464B2 (en) 2013-09-26 2018-04-24 Conduent Business Services, Llc Portable occupancy detection methods, systems and processor-readable media
US8923565B1 (en) * 2013-09-26 2014-12-30 Chengdu Haicun Ip Technology Llc Parked vehicle detection based on edge detection
US9275297B2 (en) * 2013-10-14 2016-03-01 Digitalglobe, Inc. Techniques for identifying parking lots in remotely-sensed images by identifying parking rows
US9330568B2 (en) * 2013-10-30 2016-05-03 Xerox Corporation Methods, systems and processor-readable media for parking occupancy detection utilizing laser scanning
US9499139B2 (en) 2013-12-05 2016-11-22 Magna Electronics Inc. Vehicle monitoring system
US9988047B2 (en) 2013-12-12 2018-06-05 Magna Electronics Inc. Vehicle control system with traffic driving control
US10160382B2 (en) 2014-02-04 2018-12-25 Magna Electronics Inc. Trailer backup assist system
US9623878B2 (en) 2014-04-02 2017-04-18 Magna Electronics Inc. Personalized driver assistance system for vehicle
US9487235B2 (en) 2014-04-10 2016-11-08 Magna Electronics Inc. Vehicle control system with adaptive wheel angle correction
US10328932B2 (en) 2014-06-02 2019-06-25 Magna Electronics Inc. Parking assist system with annotated map generation
CN107003406B (en) 2014-09-09 2019-11-05 莱达科技股份有限公司 The discretization of detection zone
US9925980B2 (en) 2014-09-17 2018-03-27 Magna Electronics Inc. Vehicle collision avoidance system with enhanced pedestrian avoidance
US9916660B2 (en) 2015-01-16 2018-03-13 Magna Electronics Inc. Vehicle vision system with calibration algorithm
NL2014154B1 (en) * 2015-01-19 2017-01-05 Lumi Guide Fietsdetectie Holding B V System and method for detecting the occupancy of a spatial volume.
US9764744B2 (en) 2015-02-25 2017-09-19 Magna Electronics Inc. Vehicle yaw rate estimation system
US10286855B2 (en) 2015-03-23 2019-05-14 Magna Electronics Inc. Vehicle vision system with video compression
US10946799B2 (en) 2015-04-21 2021-03-16 Magna Electronics Inc. Vehicle vision system with overlay calibration
IL238473A0 (en) * 2015-04-26 2015-11-30 Parkam Israel Ltd A method and system for detecting and mapping parking spaces
US10819943B2 (en) 2015-05-07 2020-10-27 Magna Electronics Inc. Vehicle vision system with incident recording function
US10214206B2 (en) 2015-07-13 2019-02-26 Magna Electronics Inc. Parking assist system for vehicle
US10078789B2 (en) 2015-07-17 2018-09-18 Magna Electronics Inc. Vehicle parking assist system with vision-based parking space detection
US20170025008A1 (en) * 2015-07-20 2017-01-26 Dura Operating, Llc Communication system and method for communicating the availability of a parking space
US10086870B2 (en) 2015-08-18 2018-10-02 Magna Electronics Inc. Trailer parking assist system for vehicle
US10187590B2 (en) 2015-10-27 2019-01-22 Magna Electronics Inc. Multi-camera vehicle vision system with image gap fill
US10144419B2 (en) 2015-11-23 2018-12-04 Magna Electronics Inc. Vehicle dynamic control system for emergency handling
US20170161961A1 (en) * 2015-12-07 2017-06-08 Paul Salsberg Parking space control method and system with unmanned paired aerial vehicle (uav)
US20170186317A1 (en) * 2015-12-29 2017-06-29 Tannery Creek Systems Inc. System and Method for Determining Parking Infraction
US11277558B2 (en) 2016-02-01 2022-03-15 Magna Electronics Inc. Vehicle vision system with master-slave camera configuration
US11433809B2 (en) 2016-02-02 2022-09-06 Magna Electronics Inc. Vehicle vision system with smart camera video output
US10160437B2 (en) 2016-02-29 2018-12-25 Magna Electronics Inc. Vehicle control system with reverse assist
US20170253237A1 (en) 2016-03-02 2017-09-07 Magna Electronics Inc. Vehicle vision system with automatic parking function
US10055651B2 (en) 2016-03-08 2018-08-21 Magna Electronics Inc. Vehicle vision system with enhanced lane tracking
US10789730B2 (en) * 2016-03-18 2020-09-29 Teknologian Tutkimuskeskus Vtt Oy Method and apparatus for monitoring a position
US9927253B2 (en) * 2016-05-11 2018-03-27 GE Lighting Solutions, LLC System and stereoscopic range determination method for a roadway lighting system
US10300859B2 (en) 2016-06-10 2019-05-28 Magna Electronics Inc. Multi-sensor interior mirror device with image adjustment
US10368036B2 (en) * 2016-11-17 2019-07-30 Vivotek Inc. Pair of parking area sensing cameras, a parking area sensing method and a parking area sensing system
DE102016223171A1 (en) * 2016-11-23 2018-05-24 Robert Bosch Gmbh Method and system for detecting a raised object located within a parking lot
DE102017212513A1 (en) * 2017-07-19 2019-01-24 Robert Bosch Gmbh Method and system for detecting a free area within a parking lot
DE102017212379A1 (en) * 2017-07-19 2019-01-24 Robert Bosch Gmbh Method and system for detecting a free area within a parking lot
JP6958117B2 (en) * 2017-08-29 2021-11-02 株式会社アイシン Parking support device
GB2568752B (en) * 2017-11-28 2020-12-30 Jaguar Land Rover Ltd Vehicle position identification method and apparatus
TWI651697B (en) * 2018-01-24 2019-02-21 National Chung Cheng University Parking space vacancy detection method and detection model establishment method thereof
US11288624B2 (en) * 2018-08-09 2022-03-29 Blackberry Limited Method and system for yard asset management
JP7178296B2 (en) * 2019-03-07 2022-11-25 本田技研工業株式会社 Operation system of snow removal device and operation method of snow removal device
US11748636B2 (en) 2019-11-04 2023-09-05 International Business Machines Corporation Parking spot locator based on personalized predictive analytics
US10957198B1 (en) * 2019-12-23 2021-03-23 Industrial Technology Research Institute System and method for determining parking space
CN112509360A (en) * 2020-11-05 2021-03-16 南京市德赛西威汽车电子有限公司 Parking lot parking space information calibration method, management system and parking lot
US11968639B2 (en) 2020-11-11 2024-04-23 Magna Electronics Inc. Vehicular control system with synchronized communication between control units
US11798273B2 (en) * 2021-03-12 2023-10-24 Lawrence Livermore National Security, Llc Model-based image change quantification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3108763B2 (en) * 1998-11-17 2000-11-13 工業技術院長 Chitooligosaccharide derivatives

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5910817A (en) * 1995-05-18 1999-06-08 Omron Corporation Object observing method and device
US6107942A (en) * 1999-02-03 2000-08-22 Premier Management Partners, Inc. Parking guidance and management system
US6340935B1 (en) * 1999-02-05 2002-01-22 Brett O. Hall Computerized parking facility management system
US6285297B1 (en) * 1999-05-03 2001-09-04 Jay H. Ball Determining the availability of parking spaces
US6426708B1 (en) * 2001-06-30 2002-07-30 Koninklijke Philips Electronics N.V. Smart parking advisor

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112370A (en) * 2014-07-30 2014-10-22 哈尔滨工业大学深圳研究生院 Monitoring image based intelligent parking lot parking place identification method and system
US10506309B2 (en) 2015-10-05 2019-12-10 Parkifi, Inc. Parking data collection systems and methods
WO2018050937A1 (en) * 2016-09-13 2018-03-22 Patino Alonso Nicolas Stereoscopic locating device for locating free parking spaces for motor vehicles
FR3057827A1 (en) * 2016-10-26 2018-04-27 Valeo Schalter Und Sensoren Gmbh Obstacle detection system on a traffic roadway
CN107424432A (en) * 2017-06-09 2017-12-01 成都智建新业建筑设计咨询有限公司 The method monitored in real time to parking position based on BIM technology
CN107424433A (en) * 2017-06-09 2017-12-01 成都智建新业建筑设计咨询有限公司 Intelligent underground parking lot parking position monitoring system based on BIM technology
WO2019083661A1 (en) * 2017-10-24 2019-05-02 Dish Network L.L.C. Wide area parking spot identification
US10691954B2 (en) 2017-10-24 2020-06-23 DISH Network L.L.C. Wide area parking spot identification
US10847028B2 (en) 2018-08-01 2020-11-24 Parkifi, Inc. Parking sensor magnetometer calibration
US11315416B2 (en) 2018-08-01 2022-04-26 Parkifi, Inc. Parking sensor magnetometer calibration
US10991249B2 (en) 2018-11-30 2021-04-27 Parkifi, Inc. Radar-augmentation of parking space sensors
US11322028B2 (en) 2018-11-30 2022-05-03 Parkifi, Inc. Radar-augmentation of parking space sensors
CN111292353A (en) * 2020-01-21 2020-06-16 成都恒创新星科技有限公司 Parking state change identification method
CN111292353B (en) * 2020-01-21 2023-12-19 成都恒创新星科技有限公司 Parking state change identification method

Also Published As

Publication number Publication date
US20050002544A1 (en) 2005-01-06
US7116246B2 (en) 2006-10-03

Similar Documents

Publication Publication Date Title
US7116246B2 (en) Apparatus and method for sensing the occupancy status of parking spaces in a parking lot
US11513212B2 (en) Motor vehicle and method for 360° detection of the motor vehicle's surroundings
CA2907047C (en) Method for generating a panoramic image
EP0747870B1 (en) An object observing method and device with two or more cameras
JP3497929B2 (en) Intruder monitoring device
JP4287647B2 (en) Environmental status monitoring device
US20040125207A1 (en) Robust stereo-driven video-based surveillance
JP4363295B2 (en) Plane estimation method using stereo images
US11671574B2 (en) Information processing apparatus, image capture apparatus, image processing system, and method of processing a plurality of captured images of a traveling surface where a moveable apparatus travels
JP4691701B2 (en) Number detection device and method
JPH09506454A (en) Method and apparatus for machine vision classification and tracking
KR101496390B1 (en) System for Vehicle Number Detection
JP6551623B1 (en) Information processing apparatus, moving body, image processing system, and information processing method
JP3456339B2 (en) Object observation method, object observation device using the method, traffic flow measurement device and parking lot observation device using the device
CN104917957A (en) Apparatus for controlling imaging of camera and system provided with the apparatus
JPH1144533A (en) Preceding vehicle detector
JP3800842B2 (en) Method and apparatus for measuring three-dimensional shape, and storage medium storing three-dimensional shape measuring program
JP3629935B2 (en) Speed measurement method for moving body and speed measurement device using the method
JP4144300B2 (en) Plane estimation method and object detection apparatus using stereo image
JP3011748B2 (en) Mobile counting device
KR100541865B1 (en) Vehicle Position Tracking System by using Stereo Vision
JPH10283478A (en) Method for extracting feature and device for recognizing object using the same method
JP3740836B2 (en) Three-dimensional shape measuring device
JP2000231637A (en) Image monitor device
US7453080B2 (en) System for locating a physical alteration in a structure and a method thereof

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG US

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 10490115

Country of ref document: US

122 Ep: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP