CA2767504A1 - A method of constructing a mixture model

Info

Publication number
CA2767504A1
Authority
CA
Canada
Prior art keywords
subset
component
subsets
dataset
mixture model
Prior art date
Legal status
Abandoned
Application number
CA2767504A
Other languages
French (fr)
Inventor
Robert Edward Callan
Brian Larder
Current Assignee
General Electric Co
Original Assignee
General Electric Co
Priority date
Filing date
Publication date
Application filed by General Electric Co
Publication of CA2767504A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases

Abstract

A method of constructing a general mixture model (100) of a dataset includes partitioning the dataset into at least two subsets (104) according to predefined criteria (108), generating a subset mixture model for each of the at least two subsets (110), and then combining the mixture models from each subset to generate a general mixture model (120).

Description

A METHOD OF CONSTRUCTING A MIXTURE MODEL
BACKGROUND OF THE INVENTION

Data mining is a technology used to extract information and value from data.
Data mining algorithms are used in many applications such as predicting shoppers' spending habits for targeted marketing, detecting fraudulent credit card transactions, predicting a customer's navigation path through a website, failure detection in machines, etc. Data mining uses a broad range of algorithms that have been developed over many years by the Artificial Intelligence (AI) and statistical modeling communities. There are many different classes of algorithms, but they all share some common features such as (a) a model that represents (either implicitly or explicitly) knowledge of the data domain, (b) a model building or learning phase that uses training data to construct a model, and (c) an inference facility that takes new data and applies a model to the data to make predictions.
A known example is a linear regression model where a first variable is predicted from a second variable by weighting the value of the second variable and summing the weighted value with a constant value. The weight and constant values are parameters of the model.
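As a concrete illustration of this background example, the following sketch fits such a two-parameter linear model and applies it to new data; the variable names and values are hypothetical, and NumPy's least-squares fit stands in for whatever training procedure a given application would use.

```python
import numpy as np

# Hypothetical training data: predict a first variable (spend) from a second variable (visits).
visits = np.array([3.0, 5.0, 8.0, 12.0, 15.0])
spend = np.array([20.0, 31.0, 49.0, 70.0, 88.0])

# Model building phase: least-squares fit of spend ~ weight * visits + constant.
weight, constant = np.polyfit(visits, spend, deg=1)

# Inference phase: apply the learned parameters to new data.
predicted_spend = weight * 10.0 + constant
print(f"weight={weight:.2f}, constant={constant:.2f}, prediction={predicted_spend:.2f}")
```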
Mixture models are commonly used models for data mining applications within the academic research community, as described by G. McLachlan and D. Peel in Finite Mixture Models, John Wiley & Sons (2000). There are variations on the class of mixture model, such as Mixtures of Experts and Hierarchical Mixtures of Experts. There are also well documented algorithms for building mixture models, one example being Expectation Maximization (EM). Such mixture models are generally constructed by identifying clusters or components in the data and fitting appropriate mathematical functions to each of the clusters.

BRIEF DESCRIPTION OF THE INVENTION

In one aspect, a method of generating a general mixture model of a dataset stored in a non-transitory medium comprises the steps of providing subset criteria for defining subsets of the dataset, partitioning in a processor the dataset into at least two subsets based on the subset criteria, generating a subset mixture model for each of the at least two subsets, and combining the subset mixture model for each of the at least two subsets into a general mixture model.

BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:

FIG. 1 is a flow chart depicting a method of generating a general mixture model according to one embodiment of the present invention.

FIG. 2 is a flow chart depicting a method of filtering components from subset mixture models as part of the method depicted in FIG. 1.

FIG. 3 is a chart depicting an example of filtering of a dataset according to the method of generating a general mixture model of FIG. 1.

FIG. 4 is a chart depicting a subset mixture model of a first subset.
FIG. 5 is a chart depicting a subset mixture model of a second subset.

FIG. 6 is a chart depicting a general mixture model constructed by the method disclosed in FIG. 1.

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the technology described herein. It will be evident to one skilled in the art, however, that the exemplary embodiments may be practiced without these specific details. In other instances, structures and devices are shown in diagram form in order to facilitate description of the exemplary embodiments.

The exemplary embodiments are described below with reference to the drawings.
These drawings illustrate certain details of specific embodiments that implement the module, method, and computer program product described herein. However, the drawings should not be construed as imposing on the embodiments any limitations that may be present only in the drawings. The method and computer program product may be provided on any machine-readable media for accomplishing their operations. The embodiments may be implemented using an existing computer processor, or by a special purpose computer processor incorporated for this or another purpose, or by a hardwired system.

As noted above, embodiments described herein include a computer program product comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media, which can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of machine-executable instructions or data structures and that can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communication connection (either hardwired, wireless, or a combination of hardwired and wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machine to perform a certain function or group of functions.

Embodiments will be described in the general context of method steps that may be implemented in one embodiment by a program product including machine-executable instructions, such as program code, for example, in the form of program modules executed by machines in networked environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that have the technical effect of performing particular tasks or implementing particular abstract data types.
Machine-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the method disclosed herein.
The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps.

Embodiments may be practiced in a networked environment using logical connections to one or more remote computers having processors. Logical connections may include a local area network (LAN) and a wide area network (WAN) that are presented here by way of example and not limitation. Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets, and the Internet, and may use a wide variety of different communication protocols. Those skilled in the art will appreciate that such network computing environments will typically encompass many types of computer system configurations, including personal computers, hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like.

Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communication network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
An exemplary system for implementing the overall or portions of the exemplary embodiments might include a general purpose computing device in the form of a computer, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit.
The system memory may include read only memory (ROM) and random access memory (RAM). The computer may also include a magnetic hard disk drive for reading from and writing to a magnetic hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and an optical disk drive for reading from or writing to a removable optical disk such as a CD-ROM or other optical media. The drives and their associated machine-readable media provide nonvolatile storage of machine-executable instructions, data structures, program modules and other data for the computer.

Technical effects of the method disclosed in the embodiments include more efficiently providing accurate models for mining complex data sets for predictive patterns. The method introduces a high degree of flexibility for exploring data from different perspectives using essentially a single algorithm that is tasked to solve different problems. Consequently, the technical effect includes more efficient data exploration, anomaly detection, regression for predicting values and replacing missing data, and segmentation of data. Examples of how such data can be efficiently explored using the disclosed method include targeted marketing based on customers' buying habits, reducing credit risk by identifying risky credit applicants, and predictive maintenance from understanding an aircraft's state of health.

The present invention is related to generating a general mixture model of a dataset. More particularly, the dataset is partitioned into two or more subsets, a subset mixture model is generated for each subset, and then the subset mixture models are combined to generate the general mixture model of the dataset.

Referring now to FIG. 1, the method of generating a general mixture model 100 is disclosed. First, a dataset contained in a database 102, along with subset criteria 108, is provided for generating subsets with a subset identification 104. The database with the constituent dataset can be stored in an electronic memory. The dataset can contain multiple dimensions or parameters, with each dimension having one or more values associated with it. The values can be either discrete values or continuous values. For example, a dataset can comprise a dimension titled gas turbine engine with discrete values of CFM56, CF6, CF34, GE90, and GEnx. The discrete values represent various models of gas turbine engines manufactured and sold by the General Electric Company. The dataset can further comprise another dimension titled air frame with discrete values of B737-700, B737-700ER, B747-8, B777-200LR, B777-300ER, and B787, representing various airframes on which the gas turbine engines of the gas turbine engine dimension of the dataset can be mounted. Continuing with this example, the dataset may further comprise a dimension titled thrust with continuous values, such as values in the range of 18,000 pounds-force to 115,000 pounds-force (80 kN to 512 kN).

The subset criteria 108 can be one or more values of one or more dimensions of the dataset that can be used to filter the dataset. The subset criteria can be stored in a relational database or designated by any other known method. Generally, the subset criteria 108 are formulated by the user of the dataset, based on what the user wants to learn from the dataset. The subset criteria 108 can contain any number of individual criteria for filtering and partitioning the data in the dataset. Continuing with the example above, the subset criteria 108 may comprise three different elements, such as GE90 engines mounted on a B747-8, GEnx engines mounted on a B777-300ER, and GEnx engines mounted on a B787.
Although this is an example of two-dimensional subset criteria with three elements, the subset criteria may include any number of dimensions up to the number of dimensions in the dataset and may contain any number of elements.

Generating the subsets and subset identification 104 comprises filtering through the dataset and identifying each element within each of the subsets. The number of subsets is equal to the number of elements in the subset criteria. The filtering process may be accomplished by a computer software element running on a processor with access to the electronic memory containing the database 102. After or contemporaneously with the filtering, each of the subsets is assigned a subset identifier to distinguish the subset and its constituent elements from each of the other subsets and their constituent elements. The subset identifier can be a text string or any other known method of identifying the subsets generated at 104.
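A minimal sketch of the partitioning and subset identification at 104 and 108 is given below, assuming the dataset sits in a pandas DataFrame; the column names, criteria values, and subset identifiers are hypothetical and only illustrate filtering by subset criteria and tagging each resulting subset.

```python
import pandas as pd

# Hypothetical dataset with two discrete dimensions and one continuous dimension.
dataset = pd.DataFrame({
    "engine":   ["GE90", "GEnx", "GEnx", "CF6", "GE90"],
    "airframe": ["B777-300ER", "B747-8", "B787", "B767-300", "B777-200LR"],
    "thrust":   [115000.0, 66500.0, 69800.0, 60000.0, 110000.0],
})

# Subset criteria (108): each element filters the dataset on one or more dimensions.
subset_criteria = [
    {"engine": "GE90", "airframe": "B777-300ER"},
    {"engine": "GEnx", "airframe": "B787"},
]

# Partitioning and subset identification (104): one subset per criteria element,
# each tagged with a subset identifier.
subsets = {}
for i, criteria in enumerate(subset_criteria):
    mask = pd.Series(True, index=dataset.index)
    for dimension, value in criteria.items():
        mask &= dataset[dimension] == value
    subset = dataset[mask].copy()
    subset["subset_id"] = f"subset_{i}"
    subsets[f"subset_{i}"] = subset
```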

It is next assessed whether there is at least one subset at 106. If there is not at least one subset, then the method 100 returns to 108 to accept new subset criteria that produce at least one subset. If there is at least one subset, then the method 100 generates a mixture model for each of the subsets at 110. The generation of mixture models is also commonly referred to as training in the field of data mining. The mixture model for each of the subsets can be generated by any known method and as any known type of mixture model, a non-limiting example being a Gaussian Mixture Model trained using expectation maximization (EM). The process of generating a mixture model for each subset results in a mathematical function that represents the subset density. In the example of modeling continuous random vectors, the mathematical functional representation of each of the subsets is a scaled summation of probability density functions (pdf). Each pdf corresponds to a component or cluster of data elements within the subset for which the mixture model is being generated. In other words, the method of generating a mixture model of each of the subsets 110 is conducted by a software element running on a processor, where the software element considers all data elements within the subset, clusters the data elements into one or more components, fits a pdf to each of the components, and ascribes a scaling factor to each of the components to generate a mathematical functional representation of the data. A non-limiting example of a mixture model is a Gaussian or Normal distribution mixture model of the form:

p(X) = Σ_{k=1}^{K} π_k N(X | μ_k, Σ_k)

where p(X) is a mathematical functional representation of the subset, X is a multidimensional vector representation of the variables, k is an index referring to each of the components in the subset, K is the total number of components in the subset, π_k is a scalar scaling factor corresponding to cluster k with the sum of all π_k for all K clusters equaling 1, and N(X | μ_k, Σ_k) is a normal probability density function of vector X for a component mean μ_k and covariance Σ_k.

If the vector X is of a single dimension, then Σ_k is the variance of X, and if X has two or greater dimensions, then Σ_k is a covariance matrix of X.
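One way to carry out the training at 110 is with scikit-learn's GaussianMixture, which fits exactly this form of model by expectation maximization; a minimal sketch follows, in which the number of components, feature columns, and function name are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_subset_mixture(subset_values: np.ndarray, n_components: int = 3) -> GaussianMixture:
    """Fit a Gaussian mixture model to one subset by expectation maximization (step 110).

    subset_values is a 2-D array of shape (n_samples, n_dimensions).
    """
    model = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    model.fit(subset_values)
    # model.weights_ are the scaling factors pi_k (summing to 1), model.means_ the
    # component means mu_k, and model.covariances_ the covariances Sigma_k.
    return model

# Hypothetical usage with the subsets produced at step 104:
# subset_models = {sid: fit_subset_mixture(df[["thrust"]].to_numpy()) for sid, df in subsets.items()}
```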

After the mixture models are generated for each subset at 110, it is determined if there are at least two subsets at 112. If there are not at least two subsets, then the single subset mixture model generated at 110 is the general mixture model. If, however, it is determined that there are at least two subsets at 112, then it is next determined if filtering of the model components is desired at 116. If filtering is desired at 116, then one or more components are removed from the model at 118. The filtering method of 118 is described in greater detail in conjunction with FIG. 2. Once the filtering is done at 118, or if filtering was not desired at 116, then the method 100 proceeds to 120 where the subset models are combined.

Combining subset models at 120 can comprise concatenating the mixture models generated for each of the subsets to generate a combined model. Alternatively, the combining subset models can comprise independently scaling each of the mixture models of the individual subsets prior to concatenating each of the mixture models to generate a combined model.
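The description leaves the choice between plain and independently scaled concatenation open; the sketch below shows the scaled variant, multiplying each subset's component weights by a subset-level factor so the combined weights still sum to 1. The function and argument names are assumptions.

```python
import numpy as np

def combine_subset_models(models, subset_weights=None):
    """Concatenate per-subset Gaussian mixtures into one general mixture (step 120).

    models: fitted GaussianMixture objects, one per subset.
    subset_weights: optional per-subset scaling factors summing to 1; defaults to equal scaling.
    """
    if subset_weights is None:
        subset_weights = [1.0 / len(models)] * len(models)

    weights = np.concatenate([w * m.weights_ for w, m in zip(subset_weights, models)])
    means = np.concatenate([m.means_ for m in models])
    covariances = np.concatenate([m.covariances_ for m in models])
    return weights, means, covariances  # parameters of the combined general mixture model
```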

At 122, it is determined if simplification of the model is desired. If simplification is not desired at 122, then the combined subset model is the general model at 124. If simplification is desired at 122, then a simplification of the combined model is performed at 126 and the simplified combined model is considered the general model at 128. The simplification 126 can comprise combining one or more clusters from two or more different subsets. The simplification 126 can further comprise removing one or more components from the combined mixture models of the subsets.
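The simplification at 126 is not spelled out in the description; one common way to combine two Gaussian components from different subsets into a single component is moment matching, sketched below purely as an assumed illustration.

```python
import numpy as np

def merge_components(w1, mu1, cov1, w2, mu2, cov2):
    """Merge two Gaussian components into one by moment matching (an assumed simplification for 126)."""
    w = w1 + w2                      # combined scaling factor
    a1, a2 = w1 / w, w2 / w          # relative responsibilities of the two components
    mu = a1 * mu1 + a2 * mu2         # matched mean
    d1, d2 = mu1 - mu, mu2 - mu
    cov = a1 * (cov1 + np.outer(d1, d1)) + a2 * (cov2 + np.outer(d2, d2))  # matched covariance
    return w, mu, cov
```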

Referring now to FIG. 2, the method of filtering the components of the individual subset mixture models at 118, prior to combining the subset mixture models, is described. First, a completed list for tabulating each component and its associated distances to other components is cleared at 140. Next, all of the components from all of the subsets are received by a processor and associated electronic memory at 142. A component is selected from all of the components at 144, and the distance of the selected component to all other components in other subsets is determined at 146. In other words, the selected component is compared to all other components with a subset identifier that is different from the subset identifier of the selected component. The distance can be computed by any known method including, but not limited to, the Kullback-Leibler divergence. The component and the associated distances to all the other components of other subsets are tabulated and appended to the completed list at 148. In other words, the completed list contains the distance from the component to all components of the other subsets. At 150, it is determined if the selected component is the last component. If it is not, then the method 118 returns to 144 to select the next component. If, however, at 150 it is determined that the selected component is the last component, then the completed list contains entries for all of the components of all of the subsets and the method proceeds to 152, where the completed list is sorted in descending order of the distances calculated at 146.
At 154, the top component on the completed list, or the component that has the greatest distance to all the other components of all the other subsets, is removed or filtered out.
At 156, it is determined if filtering criteria have been satisfied. The filtering criteria, for example, can be a predetermined total number of components to be filtered.
Alternatively, the filtering criteria can be the filtering of a predetermined percentage of the total number of components. If the filtering criteria are met at 156, then the final component set is identified at 160. If, however, the filtering criteria are not met at 156, then it is determined at 158 if iterative filtering is desired. The desire for iterative filtering can be set by the user of the method 118. If iterative filtering is not desired at 158, then the method returns to 154 to remove, from the remaining components, the component with the greatest distance to all other components from other subsets. If it is determined at 158 that iterative filtering is desired, then the method 118 returns to 140.
Iterative filtering means that the method 118 recalculates the distances for each component to every other component and generates a new completed list by executing 140 through 152 every time a component is removed from the mixture model. The distances between components can change and, therefore, the relative order of the components on the completed list can change as components are removed from the mixture model. Therefore, by executing iterative filtering, one can ensure with greater confidence that the component being removed is the component with the greatest distance to the components from every other subset. However, in some cases, one may not want to execute iterative filtering, because iterative filtering is more computationally intensive and, therefore, more time consuming. In other words, when executing the filtering method 118 disclosed herein, one may assess the trade-off between filtering performance and time required to filter to determine if iterative filtering is desired at 158.
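A minimal sketch of the non-iterative branch of FIG. 2 is shown below, using the closed-form Kullback-Leibler divergence between Gaussian components as the distance at 146. The way the per-pair distances are aggregated into a single ranking score (a sum over the components of the other subsets) is an assumption, since the description does not fix it.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate Gaussian components."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def filter_components(components, n_remove):
    """Remove the n_remove components farthest from the components of the other subsets (FIG. 2, non-iterative).

    components: list of dicts with keys 'subset_id', 'weight', 'mean' (1-D array), 'cov' (2-D array).
    """
    scored = []
    for c in components:                                   # 144: select each component in turn
        others = [o for o in components if o["subset_id"] != c["subset_id"]]
        total = sum(gaussian_kl(c["mean"], c["cov"], o["mean"], o["cov"]) for o in others)  # 146
        scored.append((total, c))                          # 148: append to the completed list
    scored.sort(key=lambda item: item[0], reverse=True)    # 152: sort in descending order of distance
    return [c for _, c in scored[n_remove:]]               # 154/156: drop the top-ranked components
```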

FIGS. 3-6 depict an example of executing the foregoing method 100 of generating a general mixture model. In FIG. 3, data 180 and 190 from a dataset are plotted against a variable x1. The data are further partitioned into a first subset 180, depicted as open circles on the graph, and a second subset 190, depicted as closed triangles on the graph, according to the procedures described in conjunction with 104 of method 100. Although the method 100 can be applied to multivariate analysis with many subsets, a single-variable data dependency with only two subsets is depicted in this example for simplicity in visualizing the method 100.

FIGS. 4 and 5 depict the generation of a mixture model, as at step 110, for the first subset 180 and the second subset 190, respectively. In the case of the first subset 180, three components are identified and each is fit to a scaled Gaussian distribution G1, G2, and G3 with means μ1, μ2, and μ3, respectively. In the case of the second subset 190, two components are identified and each is fit to a scaled Gaussian distribution G4 and G5 with means μ4 and μ5, respectively. Thus, the mixture model of the first subset 180 is represented by the envelope of the scaled fitting functions of the constituent components G1, G2, and G3. Similarly, the mixture model of the second subset 190 is represented by the envelope of the scaled fitting functions of the constituent components G4 and G5. In FIG. 6, the combined constituent scaled fitting functions of the general mixture model are depicted, as at step 120 of the method 100, after filtering. In this example, it can be seen that in the filtering step 118, it was found that the component with fitting function G3 was at a distance from the components of the other subset, G4 and G5, that exceeded some predetermined value (not shown), and therefore the component G3 was removed from the general mixture model of FIG. 6.
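To mirror the single-variable example of FIGS. 3-6, the compact end-to-end sketch below runs the main steps of method 100 on synthetic one-dimensional data; the data, component counts, and removal threshold are all hypothetical, and a plain distance between component means stands in for the divergence used at 146.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic one-dimensional subsets, analogous to subsets 180 and 190 of FIG. 3.
subset_1 = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(3.0, 0.5, 100),
                           rng.normal(9.0, 0.5, 100)]).reshape(-1, 1)
subset_2 = np.concatenate([rng.normal(0.5, 0.6, 100), rng.normal(3.5, 0.6, 100)]).reshape(-1, 1)

# Step 110: fit a subset mixture model to each subset (three and two components, as in FIGS. 4 and 5).
model_1 = GaussianMixture(n_components=3, random_state=0).fit(subset_1)
model_2 = GaussianMixture(n_components=2, random_state=0).fit(subset_2)

# Step 118 (simplified): drop any component of subset 1 whose mean is far from every component of
# subset 2, as component G3 is dropped in FIG. 6.
keep = [k for k, mu in enumerate(model_1.means_[:, 0])
        if np.min(np.abs(mu - model_2.means_[:, 0])) < 4.0]

# Step 120: concatenate the surviving components into the general mixture model, rescaling the
# weights so they again sum to 1.
weights = np.concatenate([model_1.weights_[keep], model_2.weights_])
weights /= weights.sum()
means = np.concatenate([model_1.means_[keep], model_2.means_])
print(weights, means.ravel())
```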

This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.

Claims (10)

1. A method of generating a general mixture model (100) of a dataset stored in a non-transitory medium (102) comprising the steps of:
providing subset criteria (108) for defining subsets of the dataset;
partitioning in a processor the dataset into at least two subsets based on the subset criteria (108);
generating a subset mixture model (110) for each of the at least two subsets;
and combining the subset mixture model for each of the at least two subsets into the general mixture model (120).
2. The method of claim 1 wherein the subset criteria include one of being defined in a relational database and filtering the dataset by at least one dimension.
3. The method of claims 1 or 2 wherein the generating step includes at least one of identifying at least one component of a subset (104), fitting a function to at least one component of a subset, scaling fitting functions by a scaling factor, and summing scaled fitting functions.
4. The method of claim 3 wherein the function is a probability density function.
5. The method of claim 4 wherein the probability density function is a normal distribution function.
6. The method of claim 3 wherein the scaling factor is a scalar value.
7. The method of claim 4 wherein the sum of all of the scaling factors corresponding to each of the fitting functions of a subset is 1.
8. The method of claims 1 or 2 wherein the combining step (120) comprises concatenating the subset mixture models for each of the at least one subset, independently scaling the subset mixture models for each of the at least one subset and then concatenating the scaled subset mixture models, and removing one or more component functions prior to combining the subset mixture models (150).
9. The method of claim 8 wherein removing of one or more component functions (150) prior to combining the subset mixture models comprises selecting a component and determining the distance between the selected component and all of the components from subsets other than the subset corresponding to the selected component (144).
10. The method of claim 9 wherein the removing of one or more component functions prior to combining the subset mixture models (150) further comprises removing the component with the greatest distance.
CA2767504A 2011-02-15 2012-02-14 A method of constructing a mixture model Abandoned CA2767504A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/027,829 US20120209880A1 (en) 2011-02-15 2011-02-15 Method of constructing a mixture model
US13/027,829 2011-02-15

Publications (1)

Publication Number Publication Date
CA2767504A1 true CA2767504A1 (en) 2012-08-15

Family

ID=45655746

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2767504A Abandoned CA2767504A1 (en) 2011-02-15 2012-02-14 A method of constructing a mixture model

Country Status (7)

Country Link
US (1) US20120209880A1 (en)
EP (1) EP2490139B1 (en)
JP (1) JP6001871B2 (en)
CN (1) CN102693265B (en)
BR (1) BR102012003344A2 (en)
CA (1) CA2767504A1 (en)
IN (1) IN2012DE00401A (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6316844B2 (en) * 2012-12-22 2018-04-25 エムモーダル アイピー エルエルシー User interface for predictive model generation
CA2932069A1 (en) 2013-11-29 2015-06-04 Ge Aviation Systems Limited Method of construction of anomaly models from abnormal data
CN106156857B (en) * 2015-03-31 2019-06-28 日本电气株式会社 The method and apparatus of the data initialization of variation reasoning
CN106156077A (en) * 2015-03-31 2016-11-23 日本电气株式会社 The method and apparatus selected for mixed model
US10817796B2 (en) * 2016-03-07 2020-10-27 D-Wave Systems Inc. Systems and methods for machine learning
CN107644279A (en) * 2016-07-21 2018-01-30 阿里巴巴集团控股有限公司 The modeling method and device of evaluation model
CN109559214A (en) 2017-09-27 2019-04-02 阿里巴巴集团控股有限公司 Virtual resource allocation, model foundation, data predication method and device
CN109657802B (en) * 2019-01-28 2020-12-29 清华大学深圳研究生院 Hybrid expert reinforcement learning method and system
CN112990337B (en) * 2021-03-31 2022-11-29 电子科技大学中山学院 Multi-stage training method for target identification

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6449612B1 (en) * 1998-03-17 2002-09-10 Microsoft Corporation Varying cluster number in a scalable clustering system for use with large databases
US6263337B1 (en) * 1998-03-17 2001-07-17 Microsoft Corporation Scalable system for expectation maximization clustering of large databases
US7039239B2 (en) * 2002-02-07 2006-05-02 Eastman Kodak Company Method for image region classification using unsupervised and supervised learning
US7299135B2 (en) * 2005-11-10 2007-11-20 Idexx Laboratories, Inc. Methods for identifying discrete populations (e.g., clusters) of data within a flow cytometer multi-dimensional data set
US7664718B2 (en) * 2006-05-16 2010-02-16 Sony Corporation Method and system for seed based clustering of categorical data using hierarchies
US8432449B2 (en) * 2007-08-13 2013-04-30 Fuji Xerox Co., Ltd. Hidden markov model for camera handoff
JP2009086581A (en) * 2007-10-03 2009-04-23 Toshiba Corp Apparatus and program for creating speaker model of speech recognition
US8521659B2 (en) * 2008-08-14 2013-08-27 The United States Of America, As Represented By The Secretary Of The Navy Systems and methods of discovering mixtures of models within data and probabilistic classification of data according to the model mixture
US8493409B2 (en) * 2009-08-18 2013-07-23 Behavioral Recognition Systems, Inc. Visualizing and updating sequences and segments in a video surveillance system
CN101882150B (en) * 2010-06-09 2012-09-26 南京大学 Three-dimensional model comparison and search method based on nuclear density estimation
US8571328B2 (en) * 2010-08-16 2013-10-29 Adobe Systems Incorporated Determining correspondence between image regions

Also Published As

Publication number Publication date
CN102693265A (en) 2012-09-26
EP2490139A1 (en) 2012-08-22
EP2490139B1 (en) 2020-04-01
JP2012168949A (en) 2012-09-06
IN2012DE00401A (en) 2015-06-05
BR102012003344A2 (en) 2015-08-04
US20120209880A1 (en) 2012-08-16
CN102693265B (en) 2017-08-25
JP6001871B2 (en) 2016-10-05

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20161209

FZDE Discontinued

Effective date: 20200214