WO2015080557A1 - Video surveillance system and method - Google Patents

Video surveillance system and method

Info

Publication number
WO2015080557A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
camera
client device
stream
live video
Prior art date
Application number
PCT/MY2014/000137
Other languages
French (fr)
Inventor
Seh Chun NG
Heng Tze Chieng
Kee Ngoh Ting
Khong Neng Choong
Original Assignee
Mimos Berhad
Priority date
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2015080557A1 publication Critical patent/WO2015080557A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19654Details concerning communication with a camera
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678User interface
    • G08B13/19691Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound
    • G08B13/19693Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound using multiple video sources viewed on a single or compound screen

Abstract

Disclosed herein is a video surveillance system that analyzes an event captured by at least one camera (100), and intelligently pairs the at least one camera (100) with at least one client device (200) and therefore allowing the at least one client device (200) to stream at least one live video from the at least one camera (100). The pairing is based on a number of factors comprising event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof. The system is also able to intelligently couple the at least one live video stream with useful contextual information based on the analyzed results. A video surveillance method for the same is also disclosed herein.

Description

VIDEO SURVEILLANCE SYSTEM AND METHOD
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to a video surveillance system and a method for the same, and more particularly to an intelligent video surveillance system and a method for the same.
BACKGROUND OF THE INVENTION
Surveillance systems are widely implemented nowadays for the purpose of monitoring activities and events that may or may not involve people. They are normally utilized to allow end users to assess activities and events and take appropriate actions based on them. For instance, a surveillance system allows law enforcers to identify a criminal activity such as a burglary, and to take appropriate actions in order to prevent the criminal activity or to capture the criminals responsible for it.
There are many ways to do so, and one of the most common is by means of imagery/electronic equipment such as closed-circuit television (CCTV) cameras and video cameras. A surveillance system that uses such imagery/electronic equipment is normally termed a video surveillance system, and is connected to a recording device or a wired or wireless IP network. This particular system normally requires handling and management by human personnel, which is the cause of a number of disadvantages.
For one disadvantage, a security guard may not be swift enough to analyze a particular event, and therefore would not be able to take appropriate actions such as informing the right party, or streaming a live video stream to the right party. For another disadvantage, an event may occur in a very short time, allowing no time for a security guard to fully comprehend and analyze the short event, thus once again appropriate actions cannot be taken. Additionally, as such a video surveillance system involves human personnel, human errors are bound to happen. For instance, a security guard may misinterpret an event, and therefore the wrong party is informed, or a live video stream is forwarded to the wrong party. For another instance, a security guard may miss an occurrence of an event entirely, and therefore be totally unaware of it, resulting in no action being taken at all.
Apart from the above, human personnel are not able to provide accurate contextual information to the end users of a video surveillance system. In addition to the video footage and images conveyed to end users, contextual information is additional information that is equally important and useful for the end users to make a better decision and take the most appropriate action.
Therefore, it has become an aim of the present invention to introduce a video surveillance system and a video surveillance method that overcome the discussed disadvantages; specifically, a video surveillance system and a video surveillance method that analyze an event, intelligently and accurately pair and stream live video streams from cameras to client devices, and couple the live video streams with useful and accurate contextual information, all without human intervention.
SUMMARY OF THE INVENTION
The first technical aspect of the present invention is directed to a video surveillance system. The system is able to analyze an event captured by at least one camera, and thereafter intelligently pair the at least one camera with at least one client device based on several factors. The pairing allows the at least one client device to stream at least one live video stream from the at least one camera. In addition to the above, the system is also capable of coupling the at least one live video stream with useful contextual information based on the analyzed results.
According to an embodiment of the present invention, the video surveillance system comprises at least one camera that is continuously monitoring an area; at least one client device that is in standby mode to be paired with and stream at least one live video stream from the at least one camera; a device manager for managing the at least one camera and the at least one client device; and a database for storing information of the at least one camera and information of the at least one client device; characterized in that the system further comprises an event detection means for detecting an occurrence of an event via the at least one camera; an event analyzing means for extracting and analyzing information of the event; a stream director means for determining pairing configurations of the at least one camera and the at least one client device based on a number of factors comprising event type, event conditions, event severity, degree of informative contents captured by the at least one camera, location of the at least one client device, how the at least one client device is registered to the device manager, network conditions, or a combination thereof, wherein the pairing configurations can be one camera to one client device, one camera to many client devices, many cameras to one client device, or many cameras to many client devices; an event prioritizing means for determining a priority score for the event based on event type, event conditions, event severity, network conditions, number of active live video streams, priority score of active live video streams, number of new live video streams, or a combination thereof; and a stream enforcer means for pairing the at least one camera and the at least one client device according to the pairing configurations determined by the stream director means and the priority score determined by the event prioritizing means, thus allowing the at least one client device to stream the at least one video stream from the at least one 
camera, and coupling the at least one live video stream with contextual information based on the analyzed results produced by the event analyzing means.
The second technical aspect of the present invention is directed to a video surveillance method. The method allows an event captured by at least one camera to be analyzed, and thereafter intelligently pairs the at least one camera with at least one client device based on several factors. The pairing allows the at least one client device to stream at least one live video stream from the at least one camera. In addition to the above, the method allows the at least one video stream to be coupled with useful contextual information based on the analyzed results.
According to an embodiment of the present invention, the video surveillance method comprises the steps of having at least one camera to continuously monitor an area; having at least one client device in standby mode to be paired with and stream at least one live video stream from the at least one camera; managing the at least one camera and at least one client device; and storing information of the at least one camera and information of the at least one client device in a database; characterized in that the method further comprises the steps of detecting an occurrence of an event via the at least one camera; extracting and analyzing information of the event; determining pairing configurations of the at least one camera and the at least one client device based on a number of factors comprising event type, event conditions, event severity, degree of informative contents captured by the at least one camera, location of the at least one client device, how the at least one client device is registered to the device manager, network conditions, or a combination thereof, wherein the pairing configurations can be one camera to one client device, one camera to many client devices, many cameras to one client device, or many cameras to many client devices; determining a priority score for the event based on event type, event conditions, event severity, network conditions, number of active live video streams, priority score of active live video streams, number of new live video streams, or a combination thereof; pairing the at least one camera and the at least one client device according to the determined pairing configurations and the determined priority score; streaming the at least one video stream from the at least one camera to the at least one client device; and coupling the at least one live video stream with contextual information based on the analyzed results.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates the video surveillance system of the present invention and the involved elements; Figure 2 illustrates an example of graphical user interface (GUI) of a client device; and Figure 3 is a flow chart depicting the video surveillance method of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The above-mentioned and other features and objects of this invention will become more apparent and better understood by reference to the following detailed description. It should be understood that the detailed description made known below is not intended to be exhaustive or to limit the invention to the precise disclosed form, as the invention may assume various alternative forms. On the contrary, the detailed description covers all the relevant modifications and alterations made to the present invention, unless the claims expressly state otherwise.
The term "event" used herein means dynamic events, which may comprise fire event, flood event, burglary event, thievery event, traffic accident event, traffic violation event, and other emergency or non-emergency events that are determined by an administrator of the system and method of the present invention. The term "camera" used herein means video cameras, closed-circuit television (CCTV) cameras, and other cameras that have the capability to capture and record images or videos.
The term "client device" used herein means mobile or non-mobile devices that have image viewing capability, or live or non-live video playback capability.
The present invention comprises two fundamental technical aspects, which will be fully described herein with the aid of accompanying figures. The first fundamental technical aspect relates to a video surveillance system, whereas the second fundamental technical aspect relates to a video surveillance method.
We now refer to the first fundamental technical aspect and Figure 1. The video surveillance system of the present invention is a system that enables an event captured by at least one camera (100) to be thoroughly analyzed, intelligently pairs the at least one camera (100) with at least one client device (200) via a wired or wireless network based on several factors, therefore allowing streaming of at least one live video stream from the at least one camera (100) to the at least one client device (200), and couples the at least one live video stream with contextual information (900) based on the analyzed results.
The aforesaid system comprises at least one camera (100); at least one client device (200); a device manager (300); a database (400); an event detection means (501); an event analyzing means (503); a stream director means (502); an event prioritizing means (504); and a stream enforcer means (506).
All of the aforesaid elements will now be elaborated in detail. The at least one camera (100) of the present system is always in operation. In other words, the at least one camera (100) is always capturing and recording images or videos, continuously monitoring an area.
The at least one client device (200) of the present system is always in standby mode. In other words, the at least one client device (200) is always ready to be paired with the at least one camera (100), and thereafter to stream at least one live video stream from the at least one camera (100).
The device manager (300) of the present system is introduced generally to manage the at least one camera (100) and the at least one client device (200). In other words, the main responsibility of the device manager (300) is to gather as much information as it can in order for the present system to intelligently pair the at least one camera (100) and the at least one client device (200) during an occurrence of an event, as will become more apparent at a later part of this document.
The device manager (300) is capable of discovering the at least one camera (100) and other cameras (100) that may or may not be in the vicinity. The locations of all those discovered cameras (100) will subsequently be identified as well. Further, the device manager (300) allows an administrator of the present system to name and group all the discovered cameras (100). The device manager (300) is also capable of discovering the at least one client device (200) and other client devices (200) that may or may not be in the vicinity. Those discovered client devices (200) will subsequently be registered to the device manager (300). The registration of the discovered client devices (200) can be done based on event type. For example, one of the discovered client devices (200) may be registered to the device manager (300) to receive notifications relating to fire events. The registration of the discovered client devices (200) can also be done based on the department to which they belong. For example, one of the discovered client devices (200) belongs to personnel of the traffic department, and therefore it may be registered to the device manager (300) to receive notifications on traffic accident events, traffic violation events, and other events that relate to traffic. The locations of those discovered client devices (200), in terms of which access points they are associated with, will also be identified and regularly updated. This is prudent as only the discovered client devices (200) that are in the vicinity of an event will be notified. For example, two of the discovered client devices (200), namely Client Device A and Client Device B, are registered to the device manager (300) to receive notifications relating to thievery events, wherein the location of Client Device A is in a town, and the location of Client Device B is out of the town. During a thievery event in the town, only Client Device A will be notified of the thievery event, whereas Client Device B will not be notified.
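The event-type and location-based notification filtering described above can be sketched as follows. The registration table, access-point names, and function below are illustrative assumptions for the Client Device A / Client Device B example, not part of the application:

```python
# Hypothetical sketch of the device manager's notification filtering.
# A device is notified only if it is registered for the event type AND
# its associated access point places it in the vicinity of the event.
registrations = {
    "DeviceA": {"event_types": {"thievery"}, "access_point": "AP-town"},
    "DeviceB": {"event_types": {"thievery"}, "access_point": "AP-outskirts"},
}

def devices_to_notify(event_type, event_access_point):
    """Return the registered devices that should be notified of an event."""
    return sorted(
        dev
        for dev, info in registrations.items()
        if event_type in info["event_types"]
        and info["access_point"] == event_access_point
    )

# A thievery event near AP-town notifies Device A but not Device B.
in_town = devices_to_notify("thievery", "AP-town")
```

In practice the location would be refreshed as devices roam between access points; here it is a static table for illustration.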
Further, the device manager (300) allows the administrator of the present system to name and group all the discovered client devices (200) based on how they are registered to the device manager (300).

The database (400) of the present system is introduced generally to store any information of the at least one camera (100) and any information of the at least one client device (200) that is presented by the device manager (300). Therefore, all the aforesaid information such as name, grouping, location, and any other information of the at least one camera (100) and the at least one client device (200) that is gathered by the device manager (300) is stored in the database (400). All event histories are stored in the database (400) as well.
The event detection means (501) of the present system is introduced to detect an occurrence of an event via the at least one camera (100). Once the event is detected, the event detection means (501) generates an event notification, which is then forwarded to the stream director means (502) to inform about the occurrence of the event. The event notification may comprise any information relating to the event and/or the at least one camera (100) that captures and records the occurrence of the event, such as event identity, event type, event timestamp, camera identity of the at least one camera (100) that captures and records the occurrence of the event, at least one event snapshot captured by the at least one camera (100), or a combination thereof. Such event notification contains important information, and therefore it is stored in the database (400).
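The fields of the event notification enumerated above might be modeled as a simple record; the field names and types in this sketch are illustrative assumptions, not from the application:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventNotification:
    """Illustrative model of the event notification generated by the
    event detection means (501): event identity, event type, event
    timestamp, camera identity, and zero or more event snapshots."""
    event_id: str
    event_type: str          # e.g. "fire", "traffic_violation"
    timestamp: float         # event timestamp
    camera_id: str           # camera that captured the event
    snapshots: List[bytes] = field(default_factory=list)

notification = EventNotification(
    event_id="EV-001",
    event_type="fire",
    timestamp=1700000000.0,
    camera_id="CAM-07",
)
```

Such a record would then be forwarded to the stream director means (502) and persisted in the database (400).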
The stream director means (502), upon receiving the event notification, will initially check if there is more than one similar event notification. If so, the stream director means (502) will consolidate all those similar event notifications, and store such consolidation in the database (400).
Thereafter, the stream director means (502) forwards the event notification to the event analyzing means (503) for analysis. In other words, the event analyzing means (503) extracts information from the event notification, and analyzes it. This is crucial as the extracted and analyzed information will be converted to contextual information (900), which will be coupled with the at least one live video stream at a later stage. The coupling of the contextual information (900) to the at least one live video stream will be discussed at a later part of this document. The analysis can be carried out in many ways. One of the ways is to have the event analyzing means (503) extract the at least one event snapshot captured by the at least one camera (100) from the event notification and analyze it. For example, the event snapshot depicts a car that violates a traffic regulation. In this case, the event analyzing means (503) will extract and analyze any information that relates to the car such as the car registration number, the car model, the car colour, the number of passengers, etc., and present the analyzed results as contextual information (900).
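The conversion of analyzed snapshot detections into contextual information (900), as in the traffic violation example above, can be sketched as follows. The detection step itself (e.g. plate recognition) is outside this sketch, and all names are illustrative assumptions:

```python
def to_contextual_info(detections):
    """Turn raw detections extracted from an event snapshot into the
    contextual information (900) key-value pairs named in the example:
    registration number, model, colour, and number of passengers.
    Missing detections are reported as "unknown"."""
    return {
        "registration_number": detections.get("plate", "unknown"),
        "model": detections.get("model", "unknown"),
        "colour": detections.get("colour", "unknown"),
        "passengers": detections.get("passengers", 0),
    }

# Example: a snapshot where only the plate and passenger count were detected.
context = to_contextual_info({"plate": "WXY 1234", "passengers": 2})
```

The resulting dictionary stands in for the analyzed results that will later be coupled with the live video stream.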
Soon after the stream director means (502) forwards the event notification to the event analyzing means (503), the stream director means (502) determines pairing configurations of the at least one camera (100) and the at least one client device (200), which can be one camera (100) to one client device (200), one camera (100) to many client devices (200), many cameras (100) to one client device (200), or many cameras (100) to many client devices (200). The at least one camera (100) and the at least one client device (200) are selected and paired based on several factors such as event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof. To be more specific, the stream director means (502) identifies the at least one camera (100) that detects the occurrence of the event, a camera group that the at least one camera (100) belongs to, and all other cameras (100) that belong to the just identified camera group. Subsequently, the stream director means (502) selects the at least one camera (100) and any other cameras (100) from the just identified cameras based on event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), network conditions, or a combination thereof. The stream director means (502) also identifies the at least one client device (200), a client device group that the at least one client device (200) belongs to, and all other client devices (200) that belong to the just identified client device group.
Subsequently, the stream director means (502) selects the at least one client device (200) and any other client devices (200) from the just identified client devices (200) to be paired with the at least one selected camera (100) based on event type, event conditions, event severity, location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof.
After the pairing configurations are determined, the stream director means (502) presents the pairing configurations in the form of a matrix (700) based on a policy (505), which determines how the matrix (700) is formed. Examples of the pairing configurations in the form of the matrix (700) are as below:
• one camera (100) to one client device (200)
[CAM(x1),Device(y1)]
• one camera (100) to many client devices (200)
[CAM(x1),Device{y1,y2,...,yn}]
• many cameras (100) to one client device (200)
[CAM{x1,x2,...,xn},Device(y1)]
• many cameras (100) to many client devices (200)
[CAM{x1,x2,...,xn},Device{y1,y2,...,yn}]

Then, the stream director means (502) proceeds to generate at least one instruction (800) based on the matrix (700), which may comprise the matrix (700), the event identity, and any other information that may be necessary for the stream enforcer means (506) to perform the pairing of the at least one camera (100) and the at least one client device (200).
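The matrix (700) entries above can be generated programmatically; the following sketch merely reproduces the bracket notation for the four pairing configurations, and the function names are illustrative:

```python
def _fmt(items, label):
    """Format one side of a matrix (700) entry: parentheses for a single
    element, braces for a set of elements."""
    if len(items) == 1:
        return f"{label}({items[0]})"
    return label + "{" + ",".join(items) + "}"

def pairing_matrix(cameras, devices):
    """Render a pairing configuration as a [CAM...,Device...] entry,
    covering one-to-one, one-to-many, many-to-one, and many-to-many."""
    return "[" + _fmt(cameras, "CAM") + "," + _fmt(devices, "Device") + "]"

one_to_one  = pairing_matrix(["x1"], ["y1"])
one_to_many = pairing_matrix(["x1"], ["y1", "y2"])
many_to_one = pairing_matrix(["x1", "x2"], ["y1"])
```

Each rendered entry could then be embedded in the instruction (800) alongside the event identity.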
Additionally, the instruction (800) is tagged with a priority score (600), which is determined by the event prioritizing means (504) based on event type, event conditions, event severity, network conditions, number of active live video streams, priority score of active live video streams, number of new live video streams, or a combination thereof. In general, a less critical event will be given a lower priority score (600), whereas a more critical event will be given a higher priority score (600). For example, a fire event will have a higher priority score (600) than a traffic violation event. For another example, a fire event involving three buildings will have a higher priority score (600) than a fire event involving only one building. Therefore, when the network is congested, the event with the higher priority score (600) will be prioritized and streamed from the at least one camera (100) to the at least one client device (200). Each priority score (600) contains a pre-defined stream configuration such as frame rate and resolution. Hence, the event with the higher priority score (600) will be streamed from the at least one camera (100) to the at least one client device (200) with a higher frame rate and resolution. Such priority score (600) contains important information, and therefore it is stored in the database (400).
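The priority scoring and per-score stream configuration described above can be sketched as follows; the base scores, severity weighting, and threshold are illustrative assumptions, not values from the application:

```python
# Assumed base priorities per event type and a severity multiplier;
# e.g. severity could be the number of buildings involved in a fire.
EVENT_BASE_PRIORITY = {"fire": 80, "flood": 70, "traffic_violation": 20}

# Pre-defined stream configurations associated with priority bands,
# so higher-priority events stream at higher frame rate and resolution.
STREAM_CONFIG = {
    "high": {"fps": 30, "resolution": "1080p"},
    "low":  {"fps": 10, "resolution": "480p"},
}

def priority_score(event_type, severity=1):
    """Score an event: base priority by type, scaled up by severity."""
    return EVENT_BASE_PRIORITY.get(event_type, 10) + 5 * severity

def stream_config(score):
    """Map a priority score to its pre-defined stream configuration."""
    return STREAM_CONFIG["high"] if score >= 50 else STREAM_CONFIG["low"]
```

Under these assumptions, a three-building fire outranks a one-building fire, and any fire outranks a traffic violation, matching the examples above.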
The stream enforcer means (506), upon receiving the instruction (800), pairs the at least one camera (100) with the at least one client device (200) according to the pairing configurations determined by the stream director means (502) and the priority score (600) determined by the event prioritizing means (504), which ultimately allows the at least one client device (200) to stream the at least one live video stream from the at least one camera (100). Specifically, the pairing is executed by having the stream enforcer means (506) interact with and provide the at least one client device (200) with any information necessary for the at least one client device (200) to interact with and request the at least one live video stream from the at least one camera (100), wherein the information may comprise the IP address of the at least one camera (100), the analyzed results produced by the event analyzing means (503), the event timestamp, or a combination thereof. All the aforesaid information can be packed into a message for easy delivery and interaction between the stream enforcer means (506) and the at least one client device (200). Once the at least one client device (200) receives the message, it is then able to request and stream the at least one live video stream from the at least one camera (100).
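The message packed by the stream enforcer means (506) for the client device might look like the following sketch, here serialized as JSON; the field names and serialization format are illustrative assumptions:

```python
import json

def pairing_message(camera_ip, analyzed_results, event_timestamp):
    """Pack the information the client device needs to request the live
    video stream, camera IP address, analyzed results, and event
    timestamp, into a single message."""
    return json.dumps({
        "camera_ip": camera_ip,
        "context": analyzed_results,       # from the event analyzing means
        "event_timestamp": event_timestamp,
    })

msg = pairing_message("192.0.2.7", {"registration_number": "WXY 1234"},
                      1700000000)
```

On receipt, the client device would parse the message and open the stream from the given camera address.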
Thereafter, the at least one client device (200) informs the stream enforcer means (506) about the streaming. Then, the stream enforcer means (506) enters this information into the database (400) and starts to couple the at least one live video stream with contextual information (900) based on the analyzed results produced by the event analyzing means (503). The contextual information (900) can be presented in the forms of images, texts, audio, augmented live video streams, or a combination thereof. Referring now to Figure 2, an example of the graphical user interface (GUI) of a client device (200) is shown. As can be seen, the client device (200) is streaming from six cameras (100), therefore having six live video streams. The first and fifth live video streams are coupled with contextual information (900), which is in the form of augmentation. The augmentation can be done in many ways, such as highlighting a particular detail in the live video stream with a box, a colour, or a combination thereof. The live video streams are also accompanied by other contextual information (900) in the forms of images, texts, and audio, all of which may be presented in a separate window.
Once the streaming has ended, the at least one client device (200) will inform the stream enforcer means (506), which subsequently enters this information into the database (400).
We now refer to the second fundamental technical aspect, Figure 1, and Figure 3. The video surveillance method of the present invention is a method that enables an event captured by at least one camera (100) to be thoroughly analyzed, intelligently pairs the at least one camera (100) with at least one client device (200) via a wired or wireless network based on several factors, therefore allowing streaming of at least one live video stream from the at least one camera (100) to the at least one client device (200), and couples the at least one live video stream with contextual information (900) based on the analyzed results.
The aforesaid method comprises the first step of having at least one camera (100) continuously monitor an area. The at least one camera (100) should always be in an operation mode, capturing and recording images or videos. The second step is to have at least one client device (200) always in standby mode. In other words, the at least one client device (200) is always ready to be paired with the at least one camera (100), and thereafter to stream at least one live video stream from the at least one camera (100). The third step is to manage the at least one camera (100) and the at least one client device (200) (11). At this point, information relating to the at least one camera (100) and the at least one client device (200) is gathered as much as possible in order to allow the present method to intelligently pair the at least one camera (100) and the at least one client device (200) during an occurrence of an event, as will become more apparent at a later part of this document.
Specifically, the management of the at least one camera (11) comprises discovering the at least one camera (100) and other cameras (100) that may or may not be in the vicinity, identifying the locations of all those discovered cameras (100), and naming and grouping all the discovered cameras (100).
Specifically, the management of the at least one client device (11) comprises discovering the at least one client device (200) and other client devices (200) that may or may not be in the vicinity, registering those discovered client devices (200), and naming and grouping all the discovered client devices (200) based on how they are registered. The registration of the discovered client devices (200) can be done based on event type. For example, one of the discovered client devices (200) may be registered to the device manager (300) to receive notifications relating to fire events. The registration of the discovered client devices (200) can also be done based on the department to which they belong. For example, one of the discovered client devices (200) belongs to personnel of the traffic department, and therefore it may be registered to the device manager (300) to receive notifications on traffic accident events, traffic violation events, and other events that relate to traffic. The locations of those discovered client devices (200), in terms of which access points they are associated with, will also be identified and regularly updated. This is prudent as only the discovered client devices (200) that are in the vicinity of an event will be notified. For example, two of the discovered client devices (200), namely Client Device A and Client Device B, are registered to the device manager (300) to receive notifications relating to thievery events, wherein the location of Client Device A is in a town, and the location of Client Device B is out of the town. During a thievery event in the town, only Client Device A will be notified of the thievery event, whereas Client Device B will not be notified. The subsequent step involves storing information of the at least one camera (100) and information of the at least one client device (200), which is gathered in the prior step, in a database (400) (12).
Therefore, all the aforesaid information such as name, grouping, location, and any other information of the at least one camera (100) and the at least one client device (200) are stored in the database (400). All event histories are stored in the database (400) as well.
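The registration and location-based notification behaviour described above can be sketched as follows. This is a minimal illustrative sketch only; the class and method names (`DeviceRegistry`, `devices_to_notify`, etc.) are assumptions for illustration and do not appear in the patent.

```python
# Hypothetical sketch of the device manager's client-device registry:
# devices register for event types, their locations (access points) are
# tracked and regularly updated, and only devices in the vicinity of an
# event are selected for notification.

class DeviceRegistry:
    def __init__(self):
        # device_id -> {"event_types": set of subscribed types, "location": str}
        self.devices = {}

    def register(self, device_id, event_types, location):
        self.devices[device_id] = {"event_types": set(event_types),
                                   "location": location}

    def update_location(self, device_id, location):
        # locations are regularly updated as devices roam between access points
        self.devices[device_id]["location"] = location

    def devices_to_notify(self, event_type, event_location):
        # notify only devices registered for this event type AND in its vicinity
        return [dev_id for dev_id, info in self.devices.items()
                if event_type in info["event_types"]
                and info["location"] == event_location]
```

With Client Device A in the town and Client Device B out of it, a thievery event in the town selects only Client Device A, mirroring the example above.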
The next step involves detecting an occurrence of an event via the at least one camera (100) (13). Once the event is detected, an event notification is generated to inform about the occurrence of the event. The event notification may comprise any information relating to the event and/or the at least one camera (100) that captures and records the occurrence of the event, such as event identity, event type, event timestamp, camera identity of the at least one camera (100) that captures and records the occurrence of the event, at least one event snapshot captured by the at least one camera (100), or a combination thereof. Such an event notification contains important information, and therefore it is stored in the database (400). If there is more than one similar event notification, those similar event notifications will be consolidated, and such action will be entered into the database (400). Thereafter, the event notification is subjected to analysis. In other words, information from the event notification is extracted and analyzed (14). This is a crucial step as the extracted and analyzed information will be converted to contextual information (900), which will be coupled with the at least one live video stream at a later stage. The coupling of the contextual information (900) to the at least one live video stream will be discussed in a later part of this document. The analysis can be carried out in many ways. One of the ways is to extract the at least one event snapshot captured by the at least one camera (100) from the event notification and analyze it. For example, the event snapshot may depict a car that violates a traffic regulation. In this case, any information that relates to the car, such as the car registration number, the car model, the car colour, the number of passengers, etc., will be extracted, analyzed, and presented as contextual information (900).
Thereafter, pairing configurations of the at least one camera (100) and the at least one client device (200) are determined (15). The pairing configurations can be one camera (100) to one client device (200), one camera (100) to many client devices (200), many cameras (100) to one client device (200), or many cameras (100) to many client devices (200). The selection of the at least one camera (100) and the at least one client device (200), and the pairing of them together, are done based on several factors such as event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof. To be more specific, the step of determining pairing configurations further comprises a first step of identifying the at least one camera (100) that detects the occurrence of the event, a camera group that the at least one camera (100) belongs to, and all other cameras (100) that belong to the just identified camera group; a second step of selecting the at least one camera (100) and any other cameras (100) from the just identified cameras based on event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), network conditions, or a combination thereof; a third step of identifying the at least one client device (200), a client device group that the at least one client device (200) belongs to, and all other client devices (200) that belong to the just identified client device group; and a fourth step of selecting the at least one client device (200) and any other client devices (200) from the just identified client devices (200) to be paired with the at least one selected camera (100) based on event type, event conditions, event severity, location of the at least one client device (200), how the at least one
client device (200) is registered to the device manager (300), network conditions, or a combination thereof. After the pairing configurations are determined (15), the pairing configurations are presented in the form of a matrix (700) based on a policy (505), which determines how the matrix (700) is formed. Examples of the pairing configurations in the form of a matrix (700) are as below:
• one camera (100) to one client device (200)
[CAM(x1), Device(y1)]
• one camera (100) to many client devices (200)
[CAM(x1), Device{y1, y2, ..., yn}]
• many cameras (100) to one client device (200)
[CAM{x1, x2, ..., xn}, Device(y1)]
• many cameras (100) to many client devices (200)
[CAM{x1, x2, ..., xn}, Device{y1, y2, ..., yn}]

Subsequently, at least one instruction (800) is generated based on the matrix (700). The instruction (800) may comprise the matrix (700), the event identity, and any other information that may be necessary for performing the pairing of the at least one camera (100) and the at least one client device (200). Additionally, the instruction (800) is tagged with a priority score (600), which is determined based on event type, event conditions, event severity, network conditions, number of active live video streams, priority score (600) of active live video streams, number of new live video streams, or a combination thereof (16). In general, a less critical event will be given a lower priority score (600), whereas a more critical event will be given a higher priority score (600). For example, a fire event will have a higher priority score (600) than a traffic violation event. For another example, a fire event involving three buildings will have a higher priority score (600) than a fire event involving only one building. Therefore, when the network is congested, the event with the higher priority score (600) will be prioritized and streamed from the at least one camera (100) to the at least one client device (200). Each priority score (600) contains a pre-defined stream configuration such as frame rate and resolution. Hence, the event with the higher priority score (600) will be streamed from the at least one camera (100) to the at least one client device (200) with a higher frame rate and resolution. Such a priority score (600) contains important information, and therefore it is stored in the database (400). The subsequent step involves pairing the at least one camera (100) with the at least one client device (200) according to the pairing configurations determined by the stream director means (502) and the priority score (600) determined by the event prioritizing means (504) (17), both of which are embedded within the instruction (800).
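The matrix, instruction, and priority tagging described above can be sketched as follows. The score table, stream configurations, and all names (`build_matrix`, `make_instruction`, etc.) are invented for illustration; the patent leaves the policy (505) and the score-to-configuration mapping unspecified.

```python
# Minimal sketch: every pairing configuration (one-to-one, one-to-many,
# many-to-one, many-to-many) reduces to a pair of lists, i.e.
# [CAM{x1..xn}, Device{y1..yn}]. The instruction carries the matrix, the
# event identity, and a priority score that selects a pre-defined stream
# configuration (frame rate and resolution).

PRIORITY = {"fire": 90, "traffic_violation": 30}          # assumed scores
STREAM_CONFIG = {90: {"fps": 30, "resolution": "1080p"},  # assumed configs
                 30: {"fps": 10, "resolution": "480p"}}

def build_matrix(cameras, devices):
    return {"cameras": list(cameras), "devices": list(devices)}

def make_instruction(event_id, event_type, cameras, devices):
    score = PRIORITY.get(event_type, 10)  # unknown events get a low score
    return {"event_id": event_id,
            "matrix": build_matrix(cameras, devices),
            "priority": score,
            "stream": STREAM_CONFIG.get(score,
                                        {"fps": 5, "resolution": "360p"})}
```

Under this sketch a fire event is tagged with a higher score than a traffic violation event, and therefore streams at a higher frame rate and resolution when the network is congested.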
This pairing ultimately allows the at least one client device (200) to stream the at least one live video stream from the at least one camera (100) (18). Specifically, the pairing is executed by interacting with and providing the at least one client device (200) with any information necessary for the at least one client device (200) to interact with and request the at least one live video stream from the at least one camera (100), wherein the information may comprise the IP address of the at least one camera (100), the analyzed results, the event timestamp, or a combination thereof. All the aforesaid information can be packed into a message for easy delivery to the at least one client device (200). Once the at least one client device (200) receives the message, it is then able to request and stream the at least one live video stream from the at least one camera (100) (18).
Thereafter, the at least one live video stream is coupled with contextual information (900) based on the analyzed results (19). The contextual information (900) can be presented in the forms of images, texts, audio, augmented live video streams, or a combination thereof. Referring now to Figure 2, an example of the graphical user interface (GUI) of a client device (200) is shown. As can be seen, the client device (200) is streaming from six cameras (100), and therefore has six live video streams. The first and fifth live video streams contain contextual information (900) in the form of augmentation. The augmentation can be done in many ways, such as highlighting a particular detail in the live video stream with a box, a colour, or a combination thereof. The live video streams are also accompanied by other contextual information (900) in the forms of images, texts, and audio, all of which may be presented in a separate window.
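The coupling of contextual information to a live stream can be sketched as attaching overlay metadata to the stream rather than re-encoding the video. The structure below (a highlight box plus text annotations) is an assumption for illustration only.

```python
# Hypothetical coupling of analyzed results to a live video stream:
# a bounding box from the analysis becomes a highlight overlay, and the
# remaining fields (e.g. car registration number, colour) become text
# annotations shown alongside the stream.

def couple_context(stream_id, analyzed):
    overlays = []
    if "box" in analyzed:
        overlays.append({"kind": "highlight", "region": analyzed["box"]})
    texts = [f"{k}: {v}" for k, v in analyzed.items() if k != "box"]
    return {"stream": stream_id, "overlays": overlays, "text": texts}
```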

Claims

1) A video surveillance system comprises:
a) at least one camera (100) that is continuously monitoring an area;
b) at least one client device (200) that is in standby mode to be paired with and stream at least one live video stream from the at least one camera (100);
c) a device manager (300) for managing the at least one camera (100) and the at least one client device (200); and
d) a database (400) for storing information of the at least one camera (100) and information of the at least one client device (200);
characterized in that the system further comprises:
e) an event detection means (501) for detecting an occurrence of an event via the at least one camera (100);
f) an event analyzing means (503) for extracting and analyzing information of the event;
g) a stream director means (502) for determining pairing configurations of the at least one camera (100) and the at least one client device (200) based on event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof, wherein the pairing configurations can be one camera (100) to one client device (200), one camera (100) to many client devices (200), many cameras (100) to one client device (200), or many cameras (100) to many client devices (200);
h) an event prioritizing means (504) for determining a priority score (600) for the event based on event type, event conditions, event severity, network conditions, number of active live video streams, priority score (600) of active live video streams, number of new live video streams, or a combination thereof; and
i) a stream enforcer means (506) for pairing the at least one camera (100) and the at least one client device (200) according to the pairing configurations determined by the stream director means (502) and the priority score (600) determined by the event prioritizing means (504), thus allowing the at least one client device (200) to stream the at least one video stream from the at least one camera (100), and coupling the at least one live video stream with contextual information (900) based on the analyzed results produced by the event analyzing means (503).
2) A video surveillance system in accordance with claim 1, wherein the event detection means (501) generates an event notification to inform the stream director means (502) about the occurrence of the event, wherein the event notification comprises event identity, event type, camera identity of the at least one camera (100) that detects the occurrence of the event, event timestamp, at least one event snapshot captured by the at least one camera (100), or a combination thereof.
3) A video surveillance system in accordance with claim 1, wherein the stream director means (502) presents the pairing configurations in the form of a matrix (700) based on a policy (505) that determines how the matrix (700) is formed, and thereafter generates at least one instruction (800) based on the matrix (700) to instruct the stream enforcer means (506) to perform the pairing, wherein the at least one instruction (800) is tagged with the priority score (600) determined by the event prioritizing means (504).
4) A video surveillance system in accordance with claim 1, wherein the stream director means (502) determines the pairing configurations by:
a) identifying the at least one camera (100) that detects the occurrence of the event, a camera group that the at least one camera (100) belongs to, and all other cameras (100) that belong to the just identified camera group;
b) selecting the at least one camera (100) and any other cameras (100) from the just identified cameras (100) based on event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), network conditions, or a combination thereof;
c) identifying the at least one client device (200), a client device group that the at least one client device (200) belongs to, and all other client devices (200) that belong to the just identified client device group; and
d) selecting the at least one client device (200) and any other client devices (200) from the just identified client devices (200) to be paired with the at least one selected camera (100) based on event type, event conditions, event severity, location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof.
5) A video surveillance system in accordance with claim 1, wherein the stream enforcer means (506) pairs the at least one camera (100) and the at least one client device (200) according to the pairing configurations determined by the stream director means (502) and the priority score (600) determined by the event prioritizing means (504) by interacting with and providing the at least one client device (200) with the IP address of the at least one camera (100), the analyzed results produced by the event analyzing means (503), event timestamp, or a combination thereof, in order to enable the at least one client device (200) to request the at least one live video stream from the at least one camera (100).
6) A video surveillance method comprises the steps of:
a) having at least one camera (100) to continuously monitor an area;
b) having at least one client device (200) in standby mode to be paired with and stream at least one live video stream from the at least one camera (100);
c) managing the at least one camera (100) and the at least one client device (200) (11); and
d) storing information of the at least one camera (100) and information of the at least one client device (200) in a database (400) (12);
characterized in that the method further comprises the steps of:
e) detecting an occurrence of an event via the at least one camera (100) (13);
f) extracting and analyzing information of the event (14);
g) determining pairing configurations of the at least one camera (100) and the at least one client device (200) (15) based on event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof, wherein the pairing configurations can be one camera (100) to one client device (200), one camera (100) to many client devices (200), many cameras (100) to one client device (200), or many cameras (100) to many client devices (200);
h) determining a priority score for the event (16) based on event type, event conditions, event severity, network conditions, number of active live video streams, priority score (600) of active live video streams, number of new live video streams, or a combination thereof;
i) pairing the at least one camera (100) and the at least one client device (200) according to the determined pairing configurations and the determined priority score (600) (17);
j) streaming the at least one video stream from the at least one camera (100) to the at least one client device (200) (18); and
k) coupling the at least one live video stream with contextual information (900) based on the analyzed results (19).

7) A video surveillance method in accordance with claim 6 further comprises the step of generating an event notification to inform about the occurrence of the event, wherein the event notification comprises event identity, event type, camera identity of the at least one camera (100) that detects the occurrence of the event, event timestamp, at least one event snapshot captured by the at least one camera (100), or a combination thereof.
8) A video surveillance method in accordance with claim 6, wherein the pairing configurations are presented in the form of a matrix (700) based on a policy (505) that determines how the matrix (700) is formed, and at least one instruction (800) is generated based on the matrix (700), wherein the at least one instruction (800) is tagged with the determined priority score (600).
9) A video surveillance method in accordance with claim 6, wherein the step of determining the pairing configurations further comprises the steps of:
a) identifying the at least one camera (100) that detects the occurrence of the event, a camera group that the at least one camera (100) belongs to, and all other cameras (100) that belong to the just identified camera group;
b) selecting the at least one camera (100) and any other cameras (100) from the just identified cameras (100) based on event type, event conditions, event severity, degree of informative contents captured by the at least one camera (100), network conditions, or a combination thereof;
c) identifying the at least one client device (200), a client device group that the at least one client device (200) belongs to, and all other client devices (200) that belong to the just identified client device group; and
d) selecting the at least one client device (200) and any other client devices (200) from the just identified client devices (200) to be paired with the at least one selected camera (100) based on event type, event conditions, event severity, location of the at least one client device (200), how the at least one client device (200) is registered to the device manager (300), network conditions, or a combination thereof.
10) A video surveillance method in accordance with claim 6, wherein the step of pairing the at least one camera (100) and the at least one client device (200) according to the determined pairing configurations and determined priority score (600) further comprises the step of interacting with and providing the at least one client device (200) with the IP address of the at least one camera (100), the analyzed results, event timestamp, or a combination thereof, in order to enable the at least one client device (200) to request the at least one live video stream from the at least one camera (100).
PCT/MY2014/000137 2013-11-28 2014-05-29 Video surveillance system and method WO2015080557A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2013702289 2013-11-28
MYPI2013702289A MY183754A (en) 2013-11-28 2013-11-28 Video surveillance system and method

Publications (1)

Publication Number Publication Date
WO2015080557A1 true WO2015080557A1 (en) 2015-06-04

Family

ID=51570823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2014/000137 WO2015080557A1 (en) 2013-11-28 2014-05-29 Video surveillance system and method

Country Status (2)

Country Link
MY (1) MY183754A (en)
WO (1) WO2015080557A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057787A1 (en) * 2000-02-04 2001-08-09 Cernium, Inc. System for automated screening of security cameras
US20030025599A1 (en) * 2001-05-11 2003-02-06 Monroe David A. Method and apparatus for collecting, sending, archiving and retrieving motion video and still images and notification of detected events
US20110242317A1 (en) * 2010-04-05 2011-10-06 Alcatel-Lucent Usa Inc. System and method for distributing digital video streams from remote video surveillance cameras to display devices
US20110261202A1 (en) * 2010-04-22 2011-10-27 Boris Goldstein Method and System for an Integrated Safe City Environment including E-City Support


Also Published As

Publication number Publication date
MY183754A (en) 2021-03-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14767165

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14767165

Country of ref document: EP

Kind code of ref document: A1