WO2008055306A1 - Machine learning system for graffiti deterrence - Google Patents

Machine learning system for graffiti deterrence

Info

Publication number
WO2008055306A1
WO2008055306A1 (PCT/AU2007/001714)
Authority
WO
WIPO (PCT)
Prior art keywords
graffiti
vandalism
signal
class
sounds
Prior art date
Application number
PCT/AU2007/001714
Other languages
French (fr)
Inventor
Seng Chu Tan
Svetha Venkatesh
Wilson Waters
Original Assignee
Curtin University Of Technology
Priority date
Filing date
Publication date
Priority claimed from AU2006906265A external-priority patent/AU2006906265A0/en
Application filed by Curtin University Of Technology filed Critical Curtin University Of Technology
Publication of WO2008055306A1 publication Critical patent/WO2008055306A1/en

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/16 Actuation by interference with mechanical vibrations in air or other fluid
    • G08B13/1654 Actuation by interference with mechanical vibrations in air or other fluid using passive vibration detection systems
    • G08B13/1672 Actuation by interference with mechanical vibrations in air or other fluid using passive vibration detection systems using sonic detecting means, e.g. a microphone operating in the audio frequency range

Abstract

This invention concerns deterring vandalism. In particular the invention concerns a system and method suitable for deterring the creation of graffiti in or on public transport vehicles. The invention uses an audio transducer mounted on public transport infrastructure to generate a signal representing the average sound pressure level in its vicinity. A pattern classification engine is operable to receive samples of the signal and divide them into blocks, recognise audio-based features associated with different types of sounds in the blocks of the signal, and finally assign a graffiti class label to each block.

Description

MACHINE LEARNING SYSTEM FOR GRAFFITI DETERRENCE
Technical Field
This invention has application to the deterrence of vandalism generally. In particular, but not exclusively, the invention concerns a system and method suitable for deterring the creation of graffiti on the windows of public transport vehicles. However, the invention has application to other forms of vandalism and in many other situations besides windows or vehicles. In other aspects the invention is software and an embedded system.
Background Art
Graffiti is a common form of vandalism often resulting in unsightly degradation of urban environments. Graffiti can be applied to almost any surface, and a popular location for poor quality and offensive graffiti is public transport infrastructure, favourite targets being buses and the carriages of trains.
Semi-permanent graffiti tags are commonly found on walls, street signs, windows and the like. These are caused mainly by spray paints, which can sometimes be painted over or removed, for instance using solvent-based paint remover.
A more destructive form of graffiti is the direct etching of a surface, usually a window, with sharp instruments such as knives, glass cutters, broken blue metal and screwdrivers. This type of graffiti can only be remediated by replacement of the window.
Current graffiti mitigation methods for windows rely on protecting the glass surface from the damage caused by graffiti implements. These involve covering the glass surfaces with a layer of Perspex screen or anti-graffiti film. However, these measures do not directly prevent the onset of the graffiti, nor are they capable of collecting further information about the graffiti event. Furthermore, the cost of replacing the damaged screen or film is still high.
Disclosure of the Invention
In a first aspect the invention is a system for deterring vandalism, including the creation of graffiti, comprising:
(a) a transducer mounted in the vicinity of expected vandalism, for instance on public transport infrastructure, to generate an audio signal, for instance representing the average sound pressure level in its vicinity;
(b) a pattern classification engine operable to:
(i) receive samples of the signal and divide them into blocks;
(ii) recognise audio-based features associated with different types of sounds in each block of the signal; and (iii) assign a graffiti class label to each block.
Such a system is able to detect and classify sound types that might be emitted by the use of graffiti-causing instruments, for instance impacting on the windows, walls and other surfaces of public transport vehicles. This allows transmission of information about the event to a monitoring station so that appropriate action can be taken. This may involve visual observation and recording of the location. Alternatively, an alarm might be sounded or transmitted.
The transducer may be a piezo-electric transducer mounted in intimate contact with a window panel of a vehicle.
The system may be trained to recognise and discard environmental sound types and normal vehicle sound types, as well as non-ambient sound types such as those made by passengers in the normal course of travelling. The system may also be trained to positively recognise the sound types caused by the use of one or more graffiti-creating instruments.
These types of training increase the robustness of the system, cutting down on false positives. They also allow new graffiti sounds to be trained and labelled.
Training may involve constructing a library of databases of different types of sounds. Models may then be built using the databases. Pattern classification techniques may then be employed to assign the class labels.
A two-stage classification scheme may be used in which a sound pressure-based classifier is used as a preliminary stage to eliminate blocks that fall below a preset threshold.
Multi-stage classification can also be employed. Here, a block is first classified using a single-class pattern classifier to eliminate background. Subsequently, a multi-stage pattern classifier assigns the block a particular graffiti class label.
The training may take place in both offline and online modes. The former approach uses a batch of hand labelled sound data to train the system using supervised learning algorithms. The latter approach is used to address previously unseen sound data so that its future detection can be assured by retraining the system. The same can be done with the background noises.
The invention could provide an important element in mitigating the occurrence of graffiti on board public transit vehicles.
In another aspect the invention is a method suitable for deterring vandalism, including the creation of graffiti, comprising the following steps: (i) generating an audio signal in the vicinity of expected vandalism, for instance near public transport infrastructure, the signal for instance representing the average sound pressure level;
(ii) dividing the signal into blocks; (iii) recognising audio-based features associated with different types of sounds in blocks of the signal; and
(iv) assigning a class label to each block.
In a further aspect the invention is software for performing the method.
In a further aspect the invention is an embedded system for performing the method.
Brief Description of the Drawings
An example of the invention will now be described with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the graffiti detection system.
Fig. 2 is a classification schema.
Fig. 3 is a classifier example showing the input sampled signal and the classification output.
Fig. 4 is a graphical interface for the graffiti detection system.
Best Modes of the Invention
Referring first to Fig. 1, a graffiti deterrence system 10 is seen to comprise an audio transducer 12 mounted on a window panel 14. The transducer is able to capture background noise as well as the sonic events caused by graffiti instruments. Signals generated by transducer 12 are acquired by signal acquisition module 16. If necessary this module also digitises the signal. The digital audio signal is then divided into blocks at sampling module 18. A feature extraction module 20 then extracts different sound features from the signal blocks. Finally, a classifier 22 labels different blocks for further use.
A decision is made in the classifier for each labelled block as to whether the block belongs to a graffiti class or simply the background class. Since the surrounding environment may be unique for every deployment the main challenge is to ensure that the decision is not adversely affected by variations in ambient noise. This is accomplished by the classifier retraining itself following the results of operator decisions. As a result the classifier is able to gradually adapt itself to its local environment over time, and so give increasingly accurate performance.
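As an illustration of this adaptation loop, the following sketch shows how operator corrections could be folded back into an incrementally trainable classifier. It is a minimal sketch only, assuming Python with NumPy and scikit-learn; the feature dimension, the synthetic training data and the helper name retrain_with_correction are illustrative assumptions and not part of the patent.

# Minimal sketch (assumed, not from the patent): an incrementally trainable
# classifier that is updated whenever the operator corrects a label.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])                    # 0 = background, 1 = graffiti

# Initial off-line training on hand-labelled feature vectors (synthetic here).
X_train = rng.normal(size=(200, 13))          # e.g. 13 features per block
y_train = rng.integers(0, 2, size=200)
clf = SGDClassifier()
clf.partial_fit(X_train, y_train, classes=classes)

def retrain_with_correction(clf, block_features, operator_label):
    """Fold one operator-corrected block back into the model."""
    clf.partial_fit(block_features.reshape(1, -1), [operator_label])
    return clf

# On-line: the operator marks a misclassified block as background (label 0).
clf = retrain_with_correction(clf, rng.normal(size=13), operator_label=0)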
In a particular example a contact piezo-electric transducer 12 is mounted on the window 14. Transducer 12 captures the average sound pressure level around the window panel 14. Contact between the sensor 12 and the window surface 14 is necessary to ensure maximum signal-to-noise quality when capturing the sound impact of graffiti instruments on the window.
If an analog transducer 12 were used, the signal would need to be digitised using an analog-to-digital converter (ADC), at 16. The ADC first samples the input analog signal at the Nyquist rate (which is twice the maximum frequency component of the signal). Most transducers have a frequency response of up to 8000 Hz, and therefore a sampling frequency of 16000 Hz is suitable. After sampling, the ADC quantises the signal into digital form at between 8 and 64 bits per sample, depending on the sensitivity required. The result is a digital equivalent of the input analog signal.
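By way of illustration only, a minimal sketch of this sampling and quantisation step is given below. It assumes Python with NumPy; the 440 Hz test tone, the 100 ms duration and the 16-bit depth are arbitrary choices within the ranges the text describes, not values from the patent.

# Sketch (assumed): sampling at twice the transducer bandwidth and uniform
# quantisation of the resulting samples to a chosen bit depth.
import numpy as np

f_max = 8000                          # assumed transducer bandwidth, Hz
fs = 2 * f_max                        # Nyquist rate gives 16000 Hz sampling
bits = 16                             # within the 8 to 64 bits per sample range

t = np.arange(0, 0.1, 1.0 / fs)                  # 100 ms of sample instants
analog = 0.5 * np.sin(2 * np.pi * 440 * t)       # stand-in for the sensor signal

q_levels = 2 ** (bits - 1) - 1                   # signed integer full scale
digital = np.round(analog * q_levels).astype(np.int32)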
Further processing of the digital signal makes use of blocks of digital samples. Capturing the blocks of signal information is achieved by sample windowing, and the size of the window determines the number of samples in each block. The window size may be different in different installations, and it may be varied dynamically in any particular installation. The windowing function can be applied in an overlapping, or non-overlapping, arrangement across the digital signal. The feature extraction module 20 extracts important information-bearing features of the digital signal blocks to allow enhanced discrimination between types of sounds. The feature extraction process also retains the within-class characteristic features of the different types of sounds. Feature vectors are extracted from the blocks of digital signals regardless of their size. Standard audio-based features such as Mel Frequency Cepstral Coefficients (MFCC), pitch and centroid frequency are examples of the features that might be extracted.
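The block division and one simple spectral feature could be sketched as follows. This is an assumed illustration in Python with NumPy: the 1024-sample block size, the 50% overlap and the use of the spectral centroid (rather than a full MFCC front end, which would normally come from an audio library) are choices made here for brevity.

# Sketch (assumed): overlapping sample windows and a spectral-centroid
# feature computed for each block.
import numpy as np

def block_signal(x, block_size=1024, hop=512):
    """Split a 1-D signal into overlapping blocks of block_size samples."""
    n_blocks = 1 + max(0, (len(x) - block_size) // hop)
    return np.stack([x[i * hop:i * hop + block_size] for i in range(n_blocks)])

def spectral_centroid(block, fs):
    """Frequency 'centre of mass' of one windowed block."""
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

fs = 16000
x = np.random.default_rng(1).normal(size=fs)     # one second of stand-in signal
blocks = block_signal(x)
features = np.array([[spectral_centroid(b, fs)] for b in blocks])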
The pattern classifier 22 labels the signal blocks it receives according to the feature vectors extracted. The feature vectors identify different types of sound and the labels identify these as classes. There are two common types of pattern classifier, supervised and unsupervised, and both are suitable.
The classifier's design will depend on the purpose of the graffiti detection system. For example, if the system requires not only a report of the graffiti occurrence, but also a report of the actual graffiti instruments used (based on the classifier results), then an (N+1)-class classifier is used, where N is the number of graffiti category classes. However, if the application does not require the reporting of the graffiti instruments, a 2-class classifier can be used instead. In this case, all graffiti category labels are merged into a single "graffiti" class.
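The difference between the (N+1)-class and 2-class designs amounts to a label mapping, sketched below under the assumption that the per-instrument labels follow the instrument names mentioned later in this description; the function name is illustrative.

# Sketch (assumed): collapsing per-instrument labels into a single "graffiti"
# class when instrument-level reporting is not required.
GRAFFITI_LABELS = {"blue_metal", "key", "pointed_object", "marker_pen"}

def to_two_class(label):
    """Map an (N+1)-class label onto the 2-class scheme."""
    return "graffiti" if label in GRAFFITI_LABELS else "background"

assert to_two_class("key") == "graffiti"
assert to_two_class("background") == "background"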
One way to reduce the computational burden of the system is to use a two-stage classification scheme. In this implementation, a sound pressure-based classifier can be used as a preliminary stage to eliminate blocks that fall below a preset threshold. It therefore will allow only signals that exceed this threshold to proceed to the main classification stage described above.
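A minimal sketch of such a preliminary level gate is given below, assuming Python with NumPy; the RMS measure, the threshold value and the classify() stand-in are assumptions used for illustration rather than values taken from the patent.

# Sketch (assumed): a cheap sound-pressure gate that passes only sufficiently
# loud blocks to the main classification stage.
import numpy as np

def sound_pressure(block):
    """Root-mean-square level of one block."""
    return float(np.sqrt(np.mean(np.square(block.astype(np.float64)))))

def two_stage(blocks, threshold, classify):
    """First stage: level gate. Second stage: the main classifier."""
    labels = []
    for block in blocks:
        if sound_pressure(block) < threshold:
            labels.append("background")        # rejected by the first stage
        else:
            labels.append(classify(block))     # passed to the main classifier
    return labels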
Multi-stage classification can also be employed. Here, a block is first classified using a single-class pattern classifier to eliminate background. Subsequently, a multi-stage pattern classifier assigns the block a particular graffiti class label. Regardless of the class, or the number of stages in the pattern classifier, the class assignment for a sample can be applied to the next consecutive sample, under predetermined circumstances, to reinforce the classification, thereby achieving a more reliable classification outcome with improved accuracy and fewer false positives.
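One simple way to realise this reinforcement across consecutive samples is to require the same non-background decision on a short run of blocks before it is reported; the sketch below assumes a run length of two and illustrative label strings.

# Sketch (assumed): a graffiti label is only asserted once it persists for
# min_run consecutive blocks, trading a little latency for fewer false alarms.
def reinforce(labels, min_run=2):
    """Suppress isolated non-background labels that do not persist."""
    confirmed, run = [], 0
    for label in labels:
        run = run + 1 if label != "background" else 0
        confirmed.append(label if run >= min_run else "background")
    return confirmed

print(reinforce(["background", "graffiti1", "graffiti1", "background", "graffiti2"]))
# -> ['background', 'background', 'graffiti1', 'background', 'background']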
One example of a supervised pattern classifier 30 is shown in Fig. 2. This supervised pattern classifier learns the key discriminating feature vectors of the sound types and automatically associates the different blocks with a corresponding label.
Training is usually conducted off-line from the main processing before installation. Training background blocks can be collected from the vehicle of interest under various conditions, for example engine on and idle, cruising in light, medium and heavy traffic, and travelling at different speeds. From these data blocks a background feature vector database 32 can be assembled. In the same way databases 34 and 36 can be assembled for different graffiti-causing instruments, such as blue-metal, key, pointed object and marker pen. In this way a library 38 is constructed.
A training module 40 takes in multi-dimensional feature vectors from the various databases and produces models. Depending on the choice of pattern classifier, the model may be a unified model or a group of category models 42.
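For illustration, the training module could be sketched as below, using a support vector machine (one of the classifier families mentioned later in this description) trained on feature-vector databases. The synthetic databases, the feature dimension and the choice of a single unified model are assumptions standing in for the real libraries 32 to 38.

# Sketch (assumed): building one unified multi-class model from per-category
# feature-vector databases.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
databases = {
    "background": rng.normal(0.0, 1.0, size=(300, 13)),
    "blue_metal": rng.normal(2.0, 1.0, size=(80, 13)),
    "key":        rng.normal(3.0, 1.0, size=(80, 13)),
    "marker_pen": rng.normal(4.0, 1.0, size=(80, 13)),
}

X = np.vstack(list(databases.values()))
y = np.concatenate([[name] * len(feats) for name, feats in databases.items()])

model = SVC(kernel="rbf")     # a unified model; per-category models would be
model.fit(X, y)               # the alternative design mentioned in the text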
The various processing stages up to the creation of the category models are usually performed off-line. The on-line classification task, illustrated in Fig. 3, receives a block containing a feature vector 50 from the feature extraction module 20 and determines the closeness of fit between that feature vector and the available models. The result is the application of a class label, such as 'background', 'graffiti1', 'graffiti2', ..., 'graffitiN' 52. Pattern classification may be achieved using various techniques, such as a decision tree, nearest neighbour, self-organising map or support vector machine. In an alternative where there are no graffiti models available to the pattern classifier, the pattern classifier is trained using only the background models. The task of the pattern classifier, in this case, is to decide whether the feature vector of a block fits within a background model, or not. If so, the block is assigned a background label. If not, the block falls outside the models and is assigned a non-background label. Blocks assigned the non-background label may comprise graffiti sounds or non-graffiti sounds, and further checking is required to determine the nature of the block. This may be done manually.
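The background-only alternative can be illustrated with a one-class classifier, as in the sketch below; the use of scikit-learn's OneClassSVM, its parameters and the synthetic background features are assumptions chosen to show the idea rather than the patent's implementation.

# Sketch (assumed): with only background models available, a one-class
# classifier accepts blocks that fit the background and flags the rest as
# non-background for further checking.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
background_features = rng.normal(0.0, 1.0, size=(500, 13))

occ = OneClassSVM(nu=0.05, kernel="rbf")
occ.fit(background_features)

def label_block(feature_vector):
    """+1 from the one-class SVM means the block fits the background model."""
    pred = occ.predict(feature_vector.reshape(1, -1))[0]
    return "background" if pred == 1 else "non-background"

print(label_block(rng.normal(0.0, 1.0, size=13)))    # most likely 'background'
print(label_block(np.full(13, 8.0)))                 # most likely 'non-background'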
Graffiti classifications can be monitored by an operator. Whenever possible, incorrectly classified signals will be marked with the correct class by the operator, and the classifier will be retrained with these results.
In the event a graffiti sound type has been detected, further evidence about the event can be collected, for instance using an on-board CCTV camera system. The time and date of the graffiti event can be used as an index to retrieve the corresponding footage from the video stream of the relevant camera. An automatic detection system makes it much easier to retrieve the appropriate footage from a potentially large video database.
A graphical interface 60 may be provided for the graffiti deterrence system; an example is illustrated in Fig. 4. The interface provides functions for: training the background and graffiti category models; classifying input signal blocks from stored audio files or live from a transducer; and monitoring classification results, using the colour bars in the top right hand corner, to reinforce training results.
The classification at the sample level is visualised at the bottom of the interface, where the original input signal is shown together with sample classifications.
The system will trigger an event notification when graffiti is detected. This event notification may be used in different ways depending on the application. An offline system may store individual events in a log file for later review and retraining; see Table 1 below for an example of a log file. This method is suitable for CCTV systems where video is tagged on-the-fly.
Table 1: example event log file (reproduced as an image in the original publication).
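Since Table 1 is reproduced only as an image in the original publication, its exact layout is not recoverable here. Purely as an illustration of an event log suitable for later review and CCTV indexing, the sketch below writes timestamped entries in a hypothetical CSV format; the field names and file name are assumptions and do not reproduce the original Table 1.

# Sketch (assumed, hypothetical format): appending detection events to a CSV
# log file so that they can be reviewed and matched against CCTV footage later.
import csv
from datetime import datetime, timezone

def log_event(path, class_label, confidence):
    """Append one detection event: UTC timestamp, class label, confidence."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), class_label, confidence])

log_event("graffiti_events.csv", "graffiti1", 0.92)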
An online system could instantly inform an operator of the event so that a security guard can be dispatched to the site. The operator first listens to the audio signal and decides whether it is actually graffiti, or simply previously unencountered background sound to be used as training data.
The graffiti detection system can be implemented on a computer with an audio capture card. Appropriate cabling and shielding against electrical interference in the microphone connection to the audio card should be employed to ensure good performance. In this particular form, the entire system can be implemented in software without further hardware assistance.
Alternatively, the system may be implemented in embedded form. Here, all processing, from digitisation and feature extraction through training, classification and reporting, is carried out in an embedded unit. A link between the embedded unit, or group of units, and a central computer is required to facilitate the setting up, maintenance and reporting of the embedded units. This communication link can be either wired or wireless, although the latter provides greater flexibility in deployment. A two-stage classification process would help alleviate the power limitations of embedded hardware, and eliminating long cables would both reduce deployment cost and ensure better signal quality from the microphone. An important feature of the embedded form of the invention is therefore the use of two-stage classification, since the first thresholding stage saves power.
Although the invention has been described with reference to a particular example, it should be appreciated that it could be exemplified in many other forms and in combination with other features not mentioned above. For instance, different transducers could be deployed, such as dynamic, capacitive, inductive and carbon sensors. The sensors may be positioned on other surfaces besides windows, and in some circumstances they may be placed adjacent to the surface rather than in intimate contact with it.
It will also be appreciated that the invention may find application in relation to other types of vandalism than those described above.

Claims

Claims
1. A system for deterring vandalism comprising:
(a) a transducer mounted to generate an audio signal in the vicinity of expected vandalism; and
(b) a pattern classification engine operable to:
(i) receive samples of the audio signal and divide them into blocks;
(ii) recognise audio-based features associated with different types of sounds in each block of the signal; and (iii) assign a class label to each block.
2. A system according to claim 1 for deterring vandalism on public infrastructure.
3. A system according to claim 2 wherein the public infrastructure is a public transit vehicle.
4. A system according to any one of the preceding claims wherein the vandalism is the creation of graffiti.
5. A system according to any one of the preceding claims, wherein the transducer is a piezo-electric transducer.
6. A system according to any one of the preceding claims wherein the transducer is mounted in intimate contact with a surface that is vulnerable to vandalism.
7. A system according to claim 6 wherein the surface is a window.
8. A system according to any one of the preceding claims, wherein the system can recognise and discard background sound types that are not related to acts of vandalism.
9. A system according to claim 8 wherein the background sound is selected from the group consisting of: environmental sound types, normal vehicle sound types, non-ambient sound types and sounds made by people in the vicinity of the transducer other than vandalism related sounds.
10. A system according to claim 8 or 9, that is trained in use to better recognise the sound types caused by use of one or more graffiti creating instruments.
11. A system according to claim 10 wherein the sound types are recognised by screening a sound type against a library of databases of different types of sounds.
12. A system according to claim 11, wherein models are built using the databases.
13. A system according to claim 12, wherein pattern classification techniques are employed to assign the class labels.
14. A system according to claim 13, wherein multi-stage pattern classification techniques are employed to assign the class labels.
15. A system according to claim 14, wherein a first stage of pattern classification involves filtering received blocks to eliminate those that fall below a given threshold.
16. A system according to claim 14 or 15, wherein a first stage of pattern classification eliminates background and a subsequent stage assigns a graffiti class label.
17. A system according to claim 1, wherein the class assignment of a sample is taken into account when assigning the class of a later sample.
18. A system according to any preceding claim wherein the system is an embedded system.
19. A method for deterring vandalism comprising the following steps: (i) generating an audio signal in the vicinity of expected vandalism;
(ii) dividing the signal into blocks; (iii) recognising audio-based features associated with different types of sounds in each block of the signal; and (iv) assigning a class label to each block.
20. A method according to claim 19 wherein the vandalism comprises the creation of graffiti.
21. Software for performing the method according to claim 19 or 20.
Dated this eighth day of November 2007
Curtin University of Technology
Patent Attorneys for the Applicant:
F B RICE & CO
PCT/AU2007/001714 2006-11-09 2007-11-08 Machine learning system for graffiti deterrence WO2008055306A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2006906265 2006-11-09
AU2006906265A AU2006906265A0 (en) 2006-11-09 Graffiti Deterrence

Publications (1)

Publication Number Publication Date
WO2008055306A1 true WO2008055306A1 (en) 2008-05-15

Family

ID=39364105

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2007/001714 WO2008055306A1 (en) 2006-11-09 2007-11-08 Machine learning system for graffiti deterrence

Country Status (1)

Country Link
WO (1) WO2008055306A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6150927A (en) * 1998-03-30 2000-11-21 Nextbus Information Systems, Llc Anti-vandalism detector and alarm system
US6288643B1 (en) * 1999-06-07 2001-09-11 Traptec Corporation Graffiti detection system and method of using the same
US6961002B2 (en) * 1999-06-07 2005-11-01 Traptec Corporation Sonic detection system and method of using the same
US6862253B2 (en) * 2002-10-23 2005-03-01 Robert L. Blosser Sonic identification system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SACCHI C. ET AL.: "A Distributed Surveillance System for Detection of Abandoned Objects in Unmanned Railways Environments", IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, vol. 49, no. 5, September 2000 (2000-09-01), pages 2013 - 2026, XP011064136 *
SACCHI C. ET AL.: "Use of Neural Networks for Behaviour Understanding in Railway Transport Monitoring Applications", PROCEEDINGS OF INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, vol. 1, 10 October 2001 (2001-10-10), pages 541 - 544 *
VU ET AL.: "Audio-Video Event Recognition System for Public Transport Security", THE INSTITUTION OF ENGINEERING AND TECHNOLOGY CONFERENCE ON CRIME AND SECURITY, 14 June 2006 (2006-06-14), pages 414 - 419, XP008104141 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013007015A1 (en) * 2013-04-23 2014-10-23 Torsten Gross Monitoring system for a motor vehicle and a method for monitoring a motor vehicle
US10037632B2 (en) 2016-09-01 2018-07-31 Ford Global Technologies, Llc Surrogate vehicle sensors
GB2556687A (en) * 2016-10-19 2018-06-06 Ford Global Tech Llc Vehicle ambient audio classification via neural network machine learning
US10276187B2 (en) 2016-10-19 2019-04-30 Ford Global Technologies, Llc Vehicle ambient audio classification via neural network machine learning
US10885930B2 (en) 2016-10-19 2021-01-05 Ford Global Technologies, Llc Vehicle ambient audio classification via neural network machine learning
US11682384B2 (en) 2020-02-27 2023-06-20 Axis Ab Method, software, and device for training an alarm system to classify audio of an event
EP3968296A1 (en) 2020-09-09 2022-03-16 Schweizerische Bundesbahnen SBB Method for monitoring a system, monitoring system and monitoring module
EP3968297A1 (en) 2020-09-09 2022-03-16 Schweizerische Bundesbahnen SBB Method for monitoring a railway system, monitoring system and monitoring module

Similar Documents

Publication Publication Date Title
CN109300471B (en) Intelligent video monitoring method, device and system for field area integrating sound collection and identification
KR101794543B1 (en) Fault Detection and Classification System of Railway Point Machine by Sound Analysis
Cao et al. Excavation equipment recognition based on novel acoustic statistical features
US8164484B2 (en) Detection and classification of running vehicles based on acoustic signatures
Carletti et al. Audio surveillance using a bag of aural words classifier
CN109616140B (en) Abnormal sound analysis system
EP2255344B1 (en) Intrusion detection system with signal recognition
JP4242422B2 (en) Sudden event recording and analysis system
EP3147902B1 (en) Sound processing apparatus, sound processing method, and computer program
CN105913059B (en) Automatic identification system for vehicle VIN code and control method thereof
Conte et al. An ensemble of rejecting classifiers for anomaly detection of audio events
WO2008055306A1 (en) Machine learning system for graffiti deterrence
WO2008148289A1 (en) An intelligent audio identifying system and method
WO2011025460A1 (en) Method and system for event detection
CN106650644B (en) The recognition methods of driver's hazardous act and system
KR20190019713A (en) System and method for classifying based on support vector machine for uav sound identification
Shanthakumari et al. Mask RCNN and Tesseract OCR for vehicle plate character recognition
KR100839815B1 (en) A system for recognizing and scanning the car number and a method thereof
CN112598865A (en) Monitoring method and system for preventing cable line from being damaged by external force
Park et al. Identifying tonal frequencies in a Lofargram with convolutional neural networks
Dedeoglu et al. Surveillance using both video and audio
CN112960506B (en) Elevator warning sound detection system based on audio frequency characteristics
CN116699521B (en) Urban noise positioning system and method based on environmental protection
CN113928947B (en) Elevator maintenance process detection method and device
Kim et al. Discriminative training of GMM via log-likelihood ratio for abnormal acoustic event classification in vehicular environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07815517

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC

122 Ep: pct application non-entry in european phase

Ref document number: 07815517

Country of ref document: EP

Kind code of ref document: A1