WO2008110002A1 - A method and a system for automatic evaluation of digital files - Google Patents

A method and a system for automatic evaluation of digital files

Info

Publication number
WO2008110002A1
Authority
WO
WIPO (PCT)
Prior art keywords
database
files
reference files
learning model
training set
Prior art date
Application number
PCT/CA2008/000481
Other languages
French (fr)
Inventor
Jocelyn Desbiens
Original Assignee
Webhitcontest Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/684,900 external-priority patent/US7873634B2/en
Priority claimed from CA2581466A external-priority patent/CA2581466C/en
Application filed by Webhitcontest Inc. filed Critical Webhitcontest Inc.
Priority to EP08733585A priority Critical patent/EP2126727A4/en
Publication of WO2008110002A1 publication Critical patent/WO2008110002A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/70 Information retrieval of video data
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]

Abstract

There is provided a method for automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted.

Description

TITLE OF THE INVENTION
A method and a system for automatic evaluation of digital files
FIELD OF THE INVENTION
[0001] The present invention relates to a method and a system for automatic evaluation of digital files. More specifically, the present invention is concerned with a method for dynamic hit scoring.
BACKGROUND OF THE INVENTION
[0002] A number of files classification or prediction methods have been developed over the years.
[0003] Li et al. (US 2004/0231498) present a method for music classification comprising extracting features of a target file; extracting features of a training set; and classifying music signals.
[0004] Blum et al. (US 5,918,223) describe a method for classifying and ranking the similarity between individual audio files, comprising supplying sets containing the features of classes of sound to a training algorithm yielding a set of vectors for each class of sound; submitting a target audio file to the same training algorithm to obtain a vector for the target file; and calculating the correlation distance between the vector for the target file and the vectors of each class, whereby the class which has the smallest distance to the target file is the class assigned to the target file.
[0005] Alcade et al. (US 7,081,579, US 2006/0254411) teach a method and system for music recommendation, comprising the steps of providing a database of references, and extracting features of a target file to determine its parameter vector using an FFT analysis method. The distance between the target file's parameter vector and each file's parameter vector in the database of references is then determined, so as to score the target file, according to its distance to each file of the database of references, via a linear regression method.
[0006] Foote et al. (US 2003/0205124), Platt et al. (US
2006/0107823), Flannery et al. (US 6,545,209) present methods for classifying music according to similarity using a distance measure.
[0007] Gang et al. (US 2003/0089218) disclose a method for predicting musical preferences of a user, comprising the steps of building a first set of information relative to a catalog of musical selection; building a second set of information relative to the tastes of the user; and combining the information of the second set with the information of the first set to provide an expected rating for every song in the catalog.
[0008] There is a need in the art for a method for dynamic hit scoring.
SUMMARY OF THE INVENTION
[0009] More specifically, there is provided a method for automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted.
[0010] There is further provided a method for automatic evaluation of songs, comprising the steps of building a database of hit songs; for each song to be evaluated, forming a training set comprising songs from the database of hit songs and building a test set from features of the song to be evaluated; dynamically generating a learning model from the training set; and applying the learning model to the test set; whereby a score corresponding to the song to be evaluated is predicted.
[0011] Other objects, advantages and features of the present invention will become more apparent upon reading of the following non- restrictive description of embodiments thereof, given by way of example only with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] In the appended drawings:
[0013] Figure 1 is a flow chart of an embodiment of a method according to an aspect of the present invention; and
[0014] Figure 2 illustrates a class-separating hyperplane in a Support Vector Machine technique used in the method of Figure 1.
DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0015] An embodiment of the method according to an aspect of the present invention generally comprises an analysis step (step 100) and a dynamic scoring step (step 200).
[0016] The method will be described herein in the case of music files for example, in relation to the flowchart of Figure 1.
[0017] In the analysis step (step 100), a database of reference files is built. In the case of music files, the database of reference files comprises hit songs for example.
[0018] A number of files, such as MP3 files or other digital format, for example, of songs identified as hits are gathered, and numerical features that represent each one of them are extracted to form n-dimensional vectors of numerical features that represent each file, referred to as feature vectors, as well known in the art.
[0019] A number of features, including for example timbre, rhythm, melody, frequency, etc., are extracted from the files to yield feature vectors corresponding to each one of them. In a hit scoring method, 84 features were extracted, for example.
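By way of illustration, a minimal sketch of this feature-extraction step is given below. The present text does not name an extraction toolkit or enumerate the 84 features, so the use of librosa and the particular features computed (MFCC timbre statistics, spectral shape, tempo) are assumptions for illustration only.

```python
import numpy as np
import librosa

def extract_feature_vector(path):
    # Decode the audio file (MP3 or other digital format) to a mono waveform.
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral shape
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # rhythm (BPM)
    # Summarize the time-varying features by mean and standard deviation,
    # yielding one fixed-length n-dimensional feature vector per file.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [centroid.mean(), centroid.std()],
        [rolloff.mean(), rolloff.std()],
        [float(np.atleast_1d(tempo)[0])],
    ])
```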
[0020] The feature vectors are stored in a database along with relevant information, such as, for example, artist's name, genre, etc. (112). Each MP3 file is rated, according to a predefined scheme, and also stored in a database (113).
[0021] The reference files, here exemplified as hit song MP3s, are selected according to a predefined rating scheme. In the case of hit songs, scoring may originate from a number of sources, including, for example, compilation of top 50 rankings, sales, air play, etc.
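Steps 112 and 113 only require databases of feature vectors and ratings; the following sqlite3 sketch is one illustrative possibility, with the schema, table and column names all assumed.

```python
import sqlite3
import numpy as np

conn = sqlite3.connect("hits.db")
conn.execute("""CREATE TABLE IF NOT EXISTS reference_files (
                    id INTEGER PRIMARY KEY,
                    artist TEXT, genre TEXT,
                    features BLOB,   -- serialized feature vector (step 112)
                    rating REAL)""") # predefined-scheme score (step 113)

def store_reference(artist, genre, vector, rating):
    blob = np.asarray(vector, dtype=np.float64).tobytes()
    conn.execute("INSERT INTO reference_files (artist, genre, features, rating) "
                 "VALUES (?, ?, ?, ?)", (artist, genre, blob, rating))
    conn.commit()

def load_references():
    rows = conn.execute("SELECT features, rating FROM reference_files").fetchall()
    X = np.vstack([np.frombuffer(f, dtype=np.float64) for f, _ in rows])
    y = np.array([r for _, r in rows])
    return X, y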
[0022] For each target file, i.e. each song to be assessed in the present example, numerical features that represent the target file are extracted to form corresponding feature vectors (114).
[0023] The dynamic scoring step (step 200) generally comprises a learning phase and a predicting phase.
[0024] In the learning phase, files from the reference database in regards to which the target file will be assessed are selected in a training set.
The training set is built by finding the n closest feature vectors to the target file's feature vector in the database of feature vectors of the hits (116). The distance/similarity between the target file's feature vector and each feature vector of the database of hits may be determined by using the Euclidean distance, the cosine distance or the Jensen-Shannon distribution similarity, as well known to people in the art.
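A sketch of this neighbour-selection step (116) follows, assuming SciPy supplies the Euclidean and cosine distances and that the Jensen-Shannon measure is computed on feature vectors normalized to probability distributions (hence non-negative); the helper names and the value of n are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import rel_entr

def jensen_shannon(p, q):
    # Jensen-Shannon divergence between two vectors normalized to distributions.
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * rel_entr(p, m).sum() + 0.5 * rel_entr(q, m).sum()

def build_training_set(target_vec, ref_vectors, ref_ratings, n=50,
                       metric="euclidean"):
    if metric in ("euclidean", "cosine"):
        d = cdist(target_vec[None, :], ref_vectors, metric=metric)[0]
    else:
        d = np.array([jensen_shannon(target_vec, r) for r in ref_vectors])
    nearest = np.argsort(d)[:n]          # the n closest hits (step 116)
    return ref_vectors[nearest], ref_ratings[nearest]
```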
[0025] The training set is then simplified by reducing its dimension (118), by using either Principal Component Analysis (PCA) or Singular Value Decomposition (SVD), for example, or non-linear regression techniques known in the art such as (but not limited to): Neural Networks, Support Vector Machines, Generalized Additive Models, Classification and Regression Trees, Multivariate Adaptive Regression Splines, Hierarchical Mixtures of Experts, and Supervised Principal Component Analysis.
[0026] PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA can be used for dimensionality reduction in a data set while retaining those characteristics of the data set that contribute most to its variance, by keeping lower-order principal components and ignoring higher-order ones. Such low-order components often contain the "most important" aspects of the data, but this is not necessarily the case, depending on the application.
[0027] The main idea behind principal component analysis is to represent multidimensional data with a smaller number of variables, while retaining the main features of the data. It is inevitable that by reducing dimensionality some features of the data will be lost. It is hoped that these lost features are comparable to "noise" and do not tell much about the underlying population.
[0028] PCA is used to project multidimensional data to a lower-dimensional space, retaining as much of the variability of the data as possible. This technique is widely used in many areas of applied statistics, naturally so, since interpretation and visualization in a lower-dimensional space are easier than in a many-dimensional space. In particular, if dimensionality can be reduced to two or three, plots and visual representations may be used to try and find some structure in the data.
[0029] PCA is one of the techniques used for dimension reduction, as will now be briefly described.
[0030] Suppose M is an m-by-n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. Then there exists a factorization of the form
M = UΣV*, where U is an m-by-m unitary matrix over K, the matrix Σ is m-by-n with nonnegative numbers on the diagonal and zeros off the diagonal, and V* denotes the conjugate transpose of V, an n-by-n unitary matrix over K. Such a factorization is called a singular value decomposition of M.
[0031] The matrix V thus contains a set of orthonormal "input" or
"analysing" basis vector directions for M. The matrix U contains a set of orthonormal "output" basis vector directions for M. The matrix Σ contains the singular values, which can be thought of as scalar "gain controls" by which each corresponding input is multiplied to give a corresponding output.
[0032] A common convention is to order the diagonal values Σ_ii in non-increasing fashion. In this case, the diagonal matrix Σ is uniquely determined by M (though the matrices U and V are not).
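The factorization can be checked numerically; the following NumPy sketch is illustrative only and uses a small random matrix.

```python
import numpy as np

M = np.random.default_rng(0).normal(size=(5, 3))   # m-by-n, here 5-by-3
U, s, Vh = np.linalg.svd(M, full_matrices=True)    # s: singular values, non-increasing
Sigma = np.zeros_like(M)                           # m-by-n, zeros off the diagonal
Sigma[:3, :3] = np.diag(s)
assert np.allclose(M, U @ Sigma @ Vh)              # M = U Σ V*
```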
[0033] Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the first principal component w_1 of a data set x can be defined as:
w_1 = arg max_{||w|| = 1} Var{w^T x} = arg max_{||w|| = 1} E{(w^T x)^2}
[0034] With the first k − 1 components, the k-th component can be found by subtracting the first k − 1 principal components from x:
x̂_{k−1} = x − Σ_{i=1}^{k−1} w_i w_i^T x
and by substituting this as the new data set to find a principal component in
w_k = arg max_{||w|| = 1} E{(w^T x̂_{k−1})^2}
[0035] The PCA transform is therefore equivalent to finding the singular value decomposition of the data matrix X,
X = W Σ V^T,
and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors, W_L:
Y = W_L^T X = Σ_L V_L^T
[0036] The matrix W of singular vectors of X is equivalent to the matrix W of eigenvectors of the observed covariance matrix C = XX^T, since XX^T = W Σ^2 W^T.
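The equivalence of paragraphs [0035] and [0036] can be illustrated numerically as follows, with X taken as a d-by-N data matrix (one column per observation), assumed centered; the data are random and for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 200))            # 10 variables, 200 observations
X -= X.mean(axis=1, keepdims=True)        # zero empirical mean per variable

W, s, Vt = np.linalg.svd(X, full_matrices=False)
L = 3                                     # keep only the first L singular vectors
Y = W[:, :L].T @ X                        # reduced-space data, Y = W_L^T X

# W's columns are also eigenvectors of C = X X^T, with eigenvalues s**2.
C = X @ X.T
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
assert np.allclose(eigvals[:L], (s**2)[:L])
```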
[0037] It is often the case that different variables have completely different scalings. For example, one of the variables may have been measured in meters and another one in centimeters (by design or accident). The eigenvalues of the covariance matrix are scale-dependent: if one column of the data matrix X is multiplied by some scale factor s, then the variance of this variable is increased by s^2, and this variable can dominate the whole covariance matrix, and hence all the eigenvalues and eigenvectors. It is necessary to take precautions when dealing with the data. If it is possible to bring all data to the same scale using some underlying physical property, then it should be done. If the scale of the data is unknown, then it is better to use the correlation matrix instead of the covariance matrix. This is in general a recommended option in many statistical packages.
[0038] It should be noted that, since scale affects eigenvalues and eigenvectors, the interpretation of the principal components derived by these two methods can be completely different. In real-life applications, care should be taken when using the correlation matrix. Outliers in the observations can affect the covariance and hence the correlation matrix, so it is recommended to use a robust estimate of the covariance (in a simple case, by rejecting outliers). When using robust estimates, the covariance matrix may not be non-negative definite and some eigenvalues might be negative. In many applications this is not important, since only the principal components corresponding to the largest eigenvalues are of interest.
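As an illustration of this precaution, the following sketch standardizes each variable (zero mean, unit variance), which makes a subsequent PCA act on the correlation matrix rather than on the scale-dependent covariance matrix; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))      # one row per song, one column per feature
X[:, 0] *= 100.0                    # e.g., one feature measured in centimeters

mu, sigma = X.mean(axis=0), X.std(axis=0)
sigma[sigma == 0] = 1.0             # guard against constant features
Z = (X - mu) / sigma                # standardized data

# Eigenvalues of the correlation matrix: no single scale-dominated component.
corr = np.corrcoef(X, rowvar=False)
print(np.linalg.eigvalsh(corr)[::-1])
```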
[0039] In either case, the number of significant variables (principal axes or singular axes) is kept to a minimum. There are many recommendations for the selection of the dimension, as follows (an illustrative sketch follows this list).
[0040] i) The proportion of variance: if the first two components account for 70%-90% or more of the total variance, then further components might be irrelevant (see the problem with scaling above).
[0041] ii) Components below a certain level can be rejected. If components have been calculated using a correlation matrix, those components with variance less than 1 are often rejected. This might be dangerous, especially if one variable is almost independent of the others: it might give rise to a component with variance less than 1, which does not mean that it is uninformative.
[0042] iii) If the uncertainty (usually expressed as a standard deviation) of the observations is known, then components with variances less than that can certainly be rejected.
[0043] iv) If a scree plot (a plot of the eigenvalues, or variances of the principal components, against their indices) shows an elbow, then components with variances below the elbow can be rejected.
[0044] v) According to a cross-validation technique, one observed value x_ij is removed and then, using the principal components, this value is predicted; this is done for all data points. If adding a component does not improve prediction power, then this component can be rejected. This technique is computer-intensive.
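The following sketch combines rules i) and iv): it keeps enough components to reach a variance threshold (90% is one of the values suggested above) and returns the scree values, which can be plotted to look for an elbow. The helper name and the threshold are illustrative.

```python
import numpy as np

def choose_n_components(X, threshold=0.90):
    Xc = X - X.mean(axis=0)                         # zero empirical mean
    s = np.linalg.svd(Xc, compute_uv=False)         # singular values, non-increasing
    variances = s**2 / (len(X) - 1)                 # scree values (eigenvalues)
    ratio = np.cumsum(variances) / variances.sum()
    k = int(np.searchsorted(ratio, threshold) + 1)  # rule i): proportion of variance
    return k, variances                             # plot `variances` for rule iv)
```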
[0045] PCA was described above as a technique, in step 118, for reducing the dimensionality of the learning set feature space, the learning set comprising the nearest neighbours of the target file.
[0046] Based on these n closest feature vectors, a learning model is dynamically generated (130), using a well-known theoretical algorithm such as the Support Vector Machine (SVM), for example, as will now be described, using for example the MCubix™ software developed by Diagnos Inc.
[0047] SVM is a supervised learning algorithm that has proven itself to be an efficient and accurate text classification technique. Like other supervised machine learning algorithms, an SVM works in two steps. In the first step (the training step), it learns a decision boundary in input space from preclassified training data. In the second step (the classification step), it classifies input vectors according to the previously learned decision boundary. A single support vector machine can only separate two classes: a positive class (y = +1) and a negative class (y = -1).
[0048] In the training step the following problem is solved. A set of training examples S_l = {(x_1, y_1), (x_2, y_2), ..., (x_l, y_l)} of size l, drawn from a fixed but unknown distribution p(x, y) describing the learning task, is given. The term-frequency vectors x_i represent documents, and y_i = ±1 indicates whether a document has been labeled with the positive class or not. The SVM aims to find a decision rule h: x → {-1, +1} that classifies the documents as accurately as possible based on the training set S_l.
[0049] A hypothesis space is given by the functions f(x) = sgn(w·x + b), where w and b are parameters that are learned in the training step and which determine the class-separating hyperplane, shown in Figure 2. Computing this hyperplane is equivalent to solving the following optimization problem:
minimize: V(w, b, ξ) = (1/2) w·w + C Σ_{i=1}^{l} ξ_i
subject to: y_i (w·x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0, for i = 1, ..., l
[0050] The constraints require that all training examples be classified correctly, allowing for some outliers symbolized by the slack variables ξ_i. If a training example lies on the wrong side of the hyperplane, the corresponding ξ_i is greater than 0. The factor C is a parameter that allows trading off training error against model complexity. In the limit C → ∞ no training error is allowed; this setting is called a hard margin SVM. A classifier with finite C is also called a soft margin Support Vector Machine. Instead of solving the above optimization problem directly, it is easier to solve the following dual optimization problem:
minimize: W(α) = −Σ_{i=1}^{l} α_i + (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} y_i y_j α_i α_j (x_i·x_j)
subject to: Σ_{i=1}^{l} y_i α_i = 0 and 0 ≤ α_i ≤ C, for i = 1, ..., l
[0051] All training examples with α_i > 0 at the solution are called support vectors. The support vectors are situated right at the margin (see the solid circle and squares in Figure 2) and define the hyperplane. The definition of a hyperplane by the support vectors is especially advantageous in high-dimensional feature spaces because a comparatively small number of parameters (the α_i in the dual sum) is required.
[0052] SVMs have been introduced within the context of statistical learning theory and structural risk minimization. In these methods one solves convex optimization problems, typically quadratic programs. Least Squares Support Vector Machines (LS-SVM) are reformulations of standard SVMs. LS-SVMs are closely related to regularization networks and Gaussian processes, but additionally emphasize and exploit primal-dual interpretations. Links with kernel versions of classical pattern recognition algorithms, such as kernel Fisher discriminant analysis, and extensions to unsupervised learning, recurrent networks and control also exist.
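A minimal soft-margin SVM in the above notation is sketched below, with scikit-learn standing in as the solver (the present text does not prescribe an implementation); the synthetic two-class data are for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 1, size=(50, 2)),
               rng.normal(+1, 1, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)  # finite C: soft margin SVM
print(len(clf.support_))                     # number of support vectors
print(clf.dual_coef_)                        # y_i * alpha_i of the support vectors
print(clf.coef_, clf.intercept_)             # w and b of f(x) = sgn(w.x + b)
```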
[0053] In order to make an LS-SVM model, two hyper-parameters are needed: a regularization parameter γ, determining the trade-off between fitting error minimization and smoothness, and the kernel bandwidth σ², at least in the common case of the RBF kernel. These two hyper-parameters are automatically computed by doing a grid search over the parameter space and picking the minimum; this procedure iteratively zooms in on the candidate optimum.
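A sketch of this iterative grid search follows. Since an LS-SVM solver is not assumed available, scikit-learn's RBF support vector regressor stands in, with C playing the role of the regularization parameter γ and the kernel `gamma` corresponding to 1/(2σ²); the grid bounds and number of zoom steps are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def tune_hyperparameters(X_train, y_train, zoom_steps=2):
    C_grid = np.logspace(-2, 3, 6)
    g_grid = np.logspace(-4, 1, 6)
    best = None
    for _ in range(zoom_steps):                  # iteratively zoom in on the optimum
        search = GridSearchCV(SVR(kernel="rbf"),
                              {"C": C_grid, "gamma": g_grid}, cv=5)
        search.fit(X_train, y_train)
        best = search.best_params_
        C_grid = np.logspace(np.log10(best["C"]) - 1,
                             np.log10(best["C"]) + 1, 6)
        g_grid = np.logspace(np.log10(best["gamma"]) - 1,
                             np.log10(best["gamma"]) + 1, 6)
    return best
```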
[0054] Once the learning model is thus generated (130), in the predicting phase (300) a test set is built from the features of the target file (140), and the test set feature space dimensionality is reduced (142), as known in the art, by using a technique such as Principal Component Analysis (PCA) or Singular Value Decomposition (SVD), keeping the same number of significant variables (principal axes or singular axes) as used in the learning set, as described hereinabove.
[0055] Then, the learning model generated in step 130 is applied to the test set, so as to determine a value corresponding to the target song (150). The rating of the target file is based on the test set and the learning set, the target file being assessed relative to the training set.
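A sketch of this predicting phase under the same assumptions: the PCA axes fitted on the training set are reused to project the test set, and a generic kernel regressor stands in for the model of step 130 (the MCubix™ API is not reproduced here); function and variable names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

def score_target(train_X, train_y, target_vec, n_components=10):
    pca = PCA(n_components=n_components).fit(train_X)         # step 118
    model = SVR(kernel="rbf").fit(pca.transform(train_X), train_y)  # step 130
    test_set = pca.transform(target_vec[None, :])             # steps 140-142
    return float(model.predict(test_set)[0])                  # step 150
```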
[0056] A storing phase may further comprise storing the predicted values in a result database.
[0057] The learning model is discarded after prediction for the target file (160), before the method is applied to another file to be evaluated (170).
[0058] As new files (hit songs) appear in the database of reference files, the training set is rebuilt by updating the closest neighbours, and the hyper-parameters are automatically updated, resulting in a dynamic scoring method.
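Putting the phases together, the dynamic character of the method can be sketched as follows, reusing the illustrative helpers defined above: one model is built per target file and discarded after use (160), so each iteration re-selects its neighbourhood against the current reference database.

```python
def evaluate_all(targets, ref_vectors, ref_ratings, n=50):
    # targets: {name: feature_vector}; ref_* reloaded as the database grows.
    results = {}
    for name, vec in targets.items():
        train_X, train_y = build_training_set(vec, ref_vectors, ref_ratings, n=n)
        results[name] = score_target(train_X, train_y, vec)  # model discarded (160)
    return results
```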
[0059] As people in the art will appreciate, the present method allows an automatic learning on a dynamic neighbourhood.
[0060] As exemplified hereinabove, the method may be used for pre-selecting songs in the context of a hit contest, for example, typically based on the popularity of the songs.
[0061] Depending on the nature of the scale used for evaluation, the present adaptive method may be applied to evaluate a range of file types (i.e., compression formats, natures of files, etc.) with increased accuracy in highly non-linear fields, by providing a dynamic learning phase.
[0062] Although the present invention has been described hereinabove by way of embodiments thereof, it may be modified, without departing from the nature and teachings of the subject invention as defined in the appended claims.

Claims

1. A method for automatic evaluation of target files, comprising the steps of: building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set; whereby a value corresponding to the target file is predicted.
2. The method of claim 1, further comprising storing the predicted value in a result database.
3. The method of claim 1, wherein said step of building a database of reference files comprises collecting files identified as references according to a predefined scheme, under a digital format; obtaining feature vectors of each of the collected files; and storing the feature vectors in a database of reference files.
4. The method of claim 3, wherein said step of building a database of reference files further comprises storing a rate, defined according to the predefined scheme, of each of the reference files in a score database.
5. The method of claim 3, wherein said step of obtaining feature vectors of each of the collected files comprises extracting, from the collected files, a number of features to yield reference feature vectors.
6. The method of claim 3, wherein said step of storing the feature vectors in a database of reference files comprises storing the feature vectors along with relevant information.
7. The method of claim 1, wherein said step of forming a training set comprises finding closest reference files in the database of reference files, versus which the target file is to be assessed.
8. The method of claim 1, wherein said step of forming a training set comprises extracting a feature vector of the target file; and finding n closest feature vectors of the target file's feature vector in the database of reference files.
9. The method of claim 8, wherein said finding n closest feature vectors comprises using one of: i) Euclidean distance, ii) cosine distance and iii) Jensen-Shannon distribution similarity.
10. The method of claim 1, wherein said step of forming a training set comprising files from the database of reference files and building a test set from features of the target file further comprises reducing the dimensionality of the training set and reducing the dimensionality of the test set.
11. The method of claim 10, wherein said steps of reducing the dimensionality are done by using one of: i) Principal Component Analysis (PCA) and ii) Singular Value Decomposition (SVD).
12. The method of claim 10, wherein said steps of reducing the dimensionality are done by a non-linear regression technique.
13. The method of claim 10, wherein said steps of reducing the dimensionality are done by one of: Neural Networks, Support Vector Machines, Generalized Additive Model, Classification and Regression Tree, Multivariate Adaptive Regression Splines, Hierarchical Mixture of Experts and Supervised Principal Component Analysis.
14. The method of claim 7, wherein said step of dynamically generating a learning model comprises using closest reference files in the database of reference files.
15. The method of claim 8, wherein said step of dynamically generating a learning model comprises using the n closest feature vectors of the target file's feature vector in the database of reference files.
16. The method of claim 7, wherein said step of dynamically generating a learning model comprises applying a prediction model to the closest reference files in the database of reference files.
17. The method of claim 8, wherein said step of dynamically generating a learning model comprises applying a prediction model to the n closest feature vectors of the target file's feature vector in the database of reference files.
18. The method of claim 7, wherein said step of dynamically generating a learning model comprises applying a prediction model comprising a Support Vector Model.
19. The method of claim 8, wherein said step of dynamically generating a learning model comprises applying a Support Vector Model to the n closest feature vectors of the target file's feature vector in the database of reference files.
20. The method of claim 1, wherein said step of dynamically generating a learning model comprises using MCubix™ by Diagnos.
21. The method of claim 1, further comprising discarding the learning model after prediction for the target file.
22. The method of claim 1, wherein said step of building a training set comprises rebuilding the training set as new files appear in the database of reference files.
23. The method of claim 7, wherein said step of forming a training set comprises finding new closest reference files in the database of reference files as new reference files appear in the database of reference files.
24. The method of claim 8, wherein said step of forming a training set comprises updating the closest neighbours as new reference files appear in the database of reference files.
25. The method of claim 1, wherein said step of generating a learning model comprises automatically generating a learning model based on a dynamic neighbourhood represented by the training set.
26. The method of claim 1, wherein the target files are song files, the reference files are hits, and the target files are assessed according to the hits.
27. A method for automatic evaluation of songs, comprising the steps of: building a database of hit songs; for each song to be evaluated, forming a training set comprising songs from the database of hit songs and building a test set from features of the song to be evaluated; dynamically generating a learning model from the training set; and applying the learning model to the test set; whereby a score corresponding to the song to be evaluated is predicted.
28. The method of claim 27, wherein said step of building a database of hit songs comprises collecting songs identified as hits according to a predefined scheme of rating.
29. The method of claim 27, wherein said step of building a database of hit songs comprises extracting, from the hit songs, a number of features representing the hit songs.
30. The method of claim 29, wherein said step of building a database of hit songs comprises extracting, from the hit songs, a number of features representing the hit songs, including at least one of timbre, rhythm, frequency and melody of the hit songs.
31. The method of claim 27, wherein said steps of forming a training set and building a test set comprise extracting a number of features, from the song to be evaluated, representing the song to be evaluated.
32. The method of claim 27, wherein said step of dynamically generating a learning model from the training set comprises using MCubix™ by Diagnos.
PCT/CA2008/000481 2007-03-12 2008-03-12 A method and a system for automatic evaluation of digital files WO2008110002A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP08733585A EP2126727A4 (en) 2007-03-12 2008-03-12 A method and a system for automatic evaluation of digital files

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/684,900 US7873634B2 (en) 2007-03-12 2007-03-12 Method and a system for automatic evaluation of digital files
CA2,581,466 2007-03-12
US11/684,900 2007-03-12
CA2581466A CA2581466C (en) 2007-03-12 2007-03-12 A method and a system for automatic evaluation of digital files

Publications (1)

Publication Number Publication Date
WO2008110002A1 true WO2008110002A1 (en) 2008-09-18

Family

ID=39758954

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2008/000481 WO2008110002A1 (en) 2007-03-12 2008-03-12 A method and a system for automatic evaluation of digital files

Country Status (2)

Country Link
EP (1) EP2126727A4 (en)
WO (1) WO2008110002A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020083060A1 (en) * 2000-07-31 2002-06-27 Wang Avery Li-Chun System and methods for recognizing sound and music signals in high noise and distortion
EP1307833A2 (en) * 2000-07-31 2003-05-07 Shazam Entertainment Limited Method for search in an audio database
US7277766B1 (en) * 2000-10-24 2007-10-02 Moodlogic, Inc. Method and system for analyzing digital audio files
US7304229B2 (en) * 2003-11-28 2007-12-04 Mediatek Incorporated Method and apparatus for karaoke scoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2126727A4 *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9100132B2 (en) 2002-07-26 2015-08-04 The Nielsen Company (Us), Llc Systems and methods for gathering audience measurement data
US8959016B2 (en) 2002-09-27 2015-02-17 The Nielsen Company (Us), Llc Activating functions in processing devices using start codes embedded in audio
US9711153B2 (en) 2002-09-27 2017-07-18 The Nielsen Company (Us), Llc Activating functions in processing devices using encoded audio and detecting audio signatures
US9609034B2 (en) 2002-12-27 2017-03-28 The Nielsen Company (Us), Llc Methods and apparatus for transcoding metadata
US9900652B2 (en) 2002-12-27 2018-02-20 The Nielsen Company (Us), Llc Methods and apparatus for transcoding metadata
US10467286B2 (en) 2008-10-24 2019-11-05 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US8554545B2 (en) 2008-10-24 2013-10-08 The Nielsen Company (Us), Llc Methods and apparatus to extract data encoded in media content
US10134408B2 (en) 2008-10-24 2018-11-20 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US11809489B2 (en) 2008-10-24 2023-11-07 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US11386908B2 (en) 2008-10-24 2022-07-12 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US8121830B2 (en) 2008-10-24 2012-02-21 The Nielsen Company (Us), Llc Methods and apparatus to extract data encoded in media content
US11256740B2 (en) 2008-10-24 2022-02-22 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US9667365B2 (en) 2008-10-24 2017-05-30 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US8359205B2 (en) 2008-10-24 2013-01-22 The Nielsen Company (Us), Llc Methods and apparatus to perform audio watermarking and watermark detection and extraction
US8508357B2 (en) 2008-11-26 2013-08-13 The Nielsen Company (Us), Llc Methods and apparatus to encode and decode audio for shopper location and advertisement presentation tracking
US11948588B2 (en) 2009-05-01 2024-04-02 The Nielsen Company (Us), Llc Methods, apparatus and articles of manufacture to provide secondary content in association with primary broadcast media content
US10555048B2 (en) 2009-05-01 2020-02-04 The Nielsen Company (Us), Llc Methods, apparatus and articles of manufacture to provide secondary content in association with primary broadcast media content
US11004456B2 (en) 2009-05-01 2021-05-11 The Nielsen Company (Us), Llc Methods, apparatus and articles of manufacture to provide secondary content in association with primary broadcast media content
US10003846B2 (en) 2009-05-01 2018-06-19 The Nielsen Company (Us), Llc Methods, apparatus and articles of manufacture to provide secondary content in association with primary broadcast media content
US8666528B2 (en) 2009-05-01 2014-03-04 The Nielsen Company (Us), Llc Methods, apparatus and articles of manufacture to provide secondary content in association with primary broadcast media content
US9380356B2 (en) 2011-04-12 2016-06-28 The Nielsen Company (Us), Llc Methods and apparatus to generate a tag for media content
US9681204B2 (en) 2011-04-12 2017-06-13 The Nielsen Company (Us), Llc Methods and apparatus to validate a tag for media
US9210208B2 (en) 2011-06-21 2015-12-08 The Nielsen Company (Us), Llc Monitoring streaming media content
US11252062B2 (en) 2011-06-21 2022-02-15 The Nielsen Company (Us), Llc Monitoring streaming media content
US11784898B2 (en) 2011-06-21 2023-10-10 The Nielsen Company (Us), Llc Monitoring streaming media content
US9515904B2 (en) 2011-06-21 2016-12-06 The Nielsen Company (Us), Llc Monitoring streaming media content
US11296962B2 (en) 2011-06-21 2022-04-05 The Nielsen Company (Us), Llc Monitoring streaming media content
US10791042B2 (en) 2011-06-21 2020-09-29 The Nielsen Company (Us), Llc Monitoring streaming media content
US9838281B2 (en) 2011-06-21 2017-12-05 The Nielsen Company (Us), Llc Monitoring streaming media content
US9197421B2 (en) 2012-05-15 2015-11-24 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US9209978B2 (en) 2012-05-15 2015-12-08 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
CN104254887A (en) * 2012-09-24 2014-12-31 希特兰布公司 A method and system for assessing karaoke users
US9313544B2 (en) 2013-02-14 2016-04-12 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US9357261B2 (en) 2013-02-14 2016-05-31 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US9336784B2 (en) 2013-07-31 2016-05-10 The Nielsen Company (Us), Llc Apparatus, system and method for merging code layers for audio encoding and decoding and error correction thereof
US9711152B2 (en) 2013-07-31 2017-07-18 The Nielsen Company (Us), Llc Systems apparatus and methods for encoding/decoding persistent universal media codes to encoded audio
US11057680B2 (en) 2015-05-29 2021-07-06 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US10694254B2 (en) 2015-05-29 2020-06-23 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US11689769B2 (en) 2015-05-29 2023-06-27 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US10299002B2 (en) 2015-05-29 2019-05-21 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US9762965B2 (en) 2015-05-29 2017-09-12 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
CN112989606A (en) * 2021-03-16 2021-06-18 上海哥瑞利软件股份有限公司 Data algorithm model checking method, system and computer storage medium

Also Published As

Publication number Publication date
EP2126727A4 (en) 2010-04-14
EP2126727A1 (en) 2009-12-02

Similar Documents

Publication Publication Date Title
US7873634B2 (en) Method and a system for automatic evaluation of digital files
WO2008110002A1 (en) A method and a system for automatic evaluation of digital files
Abonyi et al. Cluster analysis for data mining and system identification
US8909643B2 (en) Inferring emerging and evolving topics in streaming text
Milenova et al. SVM in oracle database 10g: removing the barriers to widespread adoption of support vector machines
Parapar et al. Relevance-based language modelling for recommender systems
Liao et al. A sample-based hierarchical adaptive K-means clustering method for large-scale video retrieval
Celeux et al. Variable selection in model-based clustering and discriminant analysis with a regularization approach
Kawakubo et al. Rapid feature selection based on random forests for high-dimensional data
da Costa et al. Using dynamical systems tools to detect concept drift in data streams
Cholewa et al. Estimation of the number of states for gesture recognition with Hidden Markov Models based on the number of critical points in time sequence
Birlutiu et al. Efficiently learning the preferences of people
Baniya et al. Importance of audio feature reduction in automatic music genre classification
Al-Shalabi New feature selection algorithm based on feature stability and correlation
Wong et al. Feature selection and feature extraction: Highlights
Frénay et al. Valid interpretation of feature relevance for linear data mappings
Ah-Pine et al. Similarity based hierarchical clustering with an application to text collections
Kotropoulos et al. Multimedia social search based on hypergraph learning
Balkema et al. Music playlist generation by assimilating GMMs into SOMs
CA2581466C (en) A method and a system for automatic evaluation of digital files
Lamba et al. Ranking of classification algorithm in breast Cancer based on estrogen receptor using MCDM technique
JP6648828B2 (en) Information processing system, information processing method, and program
Thirukumaran et al. Improving accuracy rate of imputation of missing data using classifier methods
Makhtar et al. Binary classification models comparison: On the similarity of datasets and confusion matrix for predictive toxicology applications
Kargas et al. Supervised learning via ensemble tensor completion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08733585

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 1779/MUMNP/2009

Country of ref document: IN

REEP Request for entry into the european phase

Ref document number: 2008733585

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008733585

Country of ref document: EP