EP1464013B1 - Systems, methods, and software for classifying documents - Google Patents


Info

Publication number
EP1464013B1
Authority
EP
European Patent Office
Prior art keywords
target
class
text
noun
input text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP02786640A
Other languages
German (de)
French (fr)
Other versions
EP1464013A2 (en)
Inventor
Khalid Al-Kofahi
Peter Jackson
Timothy Earl Travers
Alex Tyrell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Reuters Global Resources ULC
Original Assignee
Thomson Reuters Global Resources ULC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Reuters Global Resources ULC filed Critical Thomson Reuters Global Resources ULC
Priority to EP08017291A priority Critical patent/EP2012240A1/en
Publication of EP1464013A2 publication Critical patent/EP1464013A2/en
Application granted granted Critical
Publication of EP1464013B1 publication Critical patent/EP1464013B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/353Clustering; Classification into predefined classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99942Manipulating data structure, e.g. compression, compaction, compilation

Definitions

  • the present invention concerns systems, methods, and software for classifying text and documents, such as headnotes of judicial opinions.
  • For example, West Group creates and classifies headnotes --short summaries of points made in judicial opinions-- using its proprietary West Key NumberTM System. (West Key Number is a trademark of West Group.)
  • the West Key Number System is a hierarchical classification of over 20 million headnotes across more than 90,000 distinctive legal categories, or classes. Each class has not only a descriptive name, but also a unique alpha-numeric code, known as its Key Number classification.
  • ALR (American Law Reports)
  • the conventional technique entails selecting cases that have headnotes in certain classes of the West Key Number System as candidates for citations in corresponding annotations.
  • the candidate cases are then sent to professional editors for manual review and final determination of which should be cited to the corresponding annotations.
  • this simplistic mapping of classes to annotations not only sends many irrelevant cases to the editors, but also fails to send many that are relevant, both increasing the workload of the editors and limiting accuracy of the updated annotations.
  • one exemplary system aids in classifying headnotes to the ALR annotations; another aids in classifying headnotes to sections of American Jurisprudence (another encyclopedic style legal reference); and yet another aids in classifying headnotes to the West Key Number System.
  • these and other embodiments are applicable to classification of other types of documents, such as emails.
  • a computer-implemented method of classifying input text to a target classification system having two or more target classes comprising: for each target class:
  • some of the exemplary systems classify or aid manual classification of an input text by determining a set of composite scores, with each composite score corresponding to a respective target class in the target classification system. Determining each composite score preferably entails computing and applying class-specific weights to at least two of the following types of scores:
  • document refers to any addressable collection or arrangement of machine-readable data.
  • database includes any logical collection or arrangement of documents.
  • Figure 1 shows a diagram of an exemplary document classification system 100 for automatically classifying or recommending classifications of electronic documents according to a document classification scheme.
  • the exemplary embodiment classifies or recommends classification of cases, case citations, or associated headnotes, to one or more of the categories represented by 13,779 ALR annotations. (The total number of annotations is growing at a rate on the order of 20-30 annotations per month.)
  • the present invention is not limited to any particular type of documents or type of classification system.
  • the exemplary embodiment is presented as an interconnected ensemble of separate components, but some other embodiments implement their functionality using a greater or lesser number of components. Moreover, some embodiments intercouple one or more of the components through a local- or wide-area network. (Some embodiments implement one or more portions of system 100 using one or more mainframe computers or servers.) Thus, the present invention is not limited to any particular functional partition.
  • System 100 includes an ALR annotation database 110, a headnotes database 120, a classification processor 130, a preliminary classification database 140, and editorial workstations 150.
  • ALR annotation database 110 (more generally a database of electronic documents classified according to a target classification scheme) includes a set of 13,779 annotations, which are represented generally by annotation 112. The exemplary embodiment regards each annotation as a class or category.
  • Each citation identifies or is associated with at least one judicial opinion (or generally an electronic document), such as electronic judicial opinion (or case) 115.
  • Judicial opinion 115 includes and/or is associated with one or more headnotes in headnote database 120, such as headnotes 122 and 124. (In the exemplary embodiment, a typical judicial opinion or case has about 6 associated headnotes, although cases having 50 or more are not rare.)
  • a sample headnote and its assigned West Key Number class identifier are shown below.
  • headnote database 120 includes about 20 million headnotes and grows at an approximate rate of 12,000 headnotes per week. About 89% of the headnotes are associated with a single class identifier, about 10% with two class identifiers, and about 1% with more than two class identifiers.
  • headnote database 120 includes a number of headnotes, such as headnotes 126 and 128, that are not yet assigned or associated with an ALR annotation in database 110.
  • the headnotes are associated with class identifiers.
  • headnote 126 is associated with class identifiers 126.1 and 126.2
  • headnote 128 is associated with class identifier 128.1.
  • Classification processor 130 includes classifiers 131, 132, 133, and 134, a composite-score generator 135, an assignment decision-maker 136, and decision-criteria module 137. Processor 130 determines whether one or more cases associated with headnotes in headnote database 120 should be assigned to or cited within one or more of the annotations of annotation database 110. Processor 130 is also coupled to preliminary classification database 140.
  • Preliminary classification database 140 stores and/or organizes the assignment or citation recommendations.
  • the recommendations can be organized as a single first-in-first-out (FIFO) queue, or as multiple FIFO queues based on single annotations or subsets of annotations.
  • the recommendations are ultimately distributed to work center 150.
  • Work center 150 communicates with preliminary classification database 140 as well as annotation database 110 and ultimately assists users in manually updating the ALR annotations in database 110 based on the recommendations stored in database 140.
  • work center 150 includes workstations 152, 154, and 156.
  • Workstation 152, which is substantially identical to workstations 154 and 156, includes a graphical-user interface 152.1 and user-interface devices, such as a keyboard and mouse (not shown).
  • Figure 2 shows a flow chart 200 illustrating in greater detail an exemplary method of operating system 100.
  • Flow chart 200 includes a number of process blocks 210-250. Though arranged serially in the exemplary embodiment, other embodiments may reorder the blocks, omit one or more blocks, and/or execute two or more blocks in parallel using multiple processors or a single processor organized as two or more virtual machines or subprocessors. Moreover, still other embodiments implement the blocks as one or more specific interconnected hardware or integrated-circuit modules with related control and data signals communicated between and through the modules.
  • the exemplary process flow is applicable to software, firmware, hardware, and hybrid implementations.
  • the lower case letters a, h, and k respectively denote an annotation, a headnote, and a class or class identifier, such as a West Key Number class or class identifier.
  • the upper case letters A, H, and K respectively denote the set of all annotations, the set of all headnotes, and the set of all Key Number classifications.
  • variables denoting vector quantities are in bold-faced capital letters, and elements of the corresponding vectors are denoted in lower case letters. For example, V denotes a vector, and v denotes an element of vector V .
  • the exemplary method begins by representing the annotations in annotations database 110 (in Figure 1 ) as text-based feature vectors.
  • this entails representing each annotation a as a one-column feature vector, V a , based on the noun and/or noun-word pairs occurring in headnotes for the cases cited within the annotation. (Other embodiments represent the headnotes as bigrams or noun phrases.)
  • the exemplary embodiment selects from the set of all headnotes associated with the cited cases those that are most relevant to the annotation being represented. For each annotation, this entails building a feature vector using all the headnotes in all cases cited in the annotation and selecting from each case one, two, or three headnotes based on similarity between the headnotes in a cited case and those of the citing annotation and denoting the most similar headnote(s) as relevant.
  • the exemplary embodiment uses classifiers 131-134 to compute similarity scores, averages the four scores for each headnote, and defines as most relevant the highest scoring headnote plus those with a score of at least 80% of the highest score. The 80% value was chosen empirically.
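The selection rule above can be sketched as follows. This is an illustrative reading, not the patent's code: the four classifier scores for each headnote of a cited case are averaged, and the top-scoring headnote is kept along with any headnote scoring at least 80% of that maximum. All names and the input format are assumptions.

```python
# Hypothetical sketch of the relevant-headnote selection rule: average the
# four classifier scores per headnote, then keep the best headnote plus any
# within 80% of the best average. Names and data layout are illustrative.

def select_relevant_headnotes(scores_by_headnote, ratio=0.80):
    """scores_by_headnote maps a headnote id to its four similarity scores."""
    averages = {h: sum(s) / len(s) for h, s in scores_by_headnote.items()}
    best = max(averages.values())
    # Small tolerance guards against floating-point rounding at the boundary.
    return sorted(h for h, avg in averages.items() if avg >= ratio * best - 1e-9)

case_scores = {
    "h1": [0.9, 0.8, 0.7, 0.6],   # avg 0.75 (best)
    "h2": [0.6, 0.6, 0.6, 0.6],   # avg 0.60 = 80% of best -> kept
    "h3": [0.3, 0.2, 0.4, 0.3],   # avg 0.30 -> dropped
}
print(select_relevant_headnotes(case_scores))  # ['h1', 'h2']
```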
  • Figure 3 shows an example of a headnote 310 and a noun-word representation 320 in accord with the exemplary embodiment. Also shown are West Key Number classification text 330 and class identifier 340.
  • Each element v'_a is defined as v'_a = tf'_a * idf'_a.
  • tf'_a denotes the term frequency (that is, the total number of occurrences) of the term or noun-word pair associated with annotation a.
  • Specifically, this is the number of occurrences of the term within the set of headnotes associated with the annotation.
  • idf'_a denotes the inverse document frequency for the associated term or noun-word pair, defined as idf'_a = log(N / df'_a), where N is the total number of headnotes (for example, 20 million) in the collection, and df'_a is the number of headnotes (or more generally documents) containing the term or noun-word pair.
  • the prime ' notation indicates that these frequency parameters are based on proxy text, for example, the text of associated headnotes, as opposed to text of the annotation itself. (However, other embodiments may use all or portions of text from the annotation alone or in combination with proxy text, such as headnotes or other related documents.)
  • annotation-text vectors can include a large number of elements. Indeed, some annotation vectors can include hundreds of thousands of terms or noun-word pairs, with the majority of them having a low term frequency. Thus, not only to reduce the number of terms to a manageable number, but also to avoid the rare-word problem known to exist in vector-space models, the exemplary embodiment removes low-weight terms.
  • the exemplary embodiment removes as many low-weight terms as necessary to achieve a lower absolute bound of 500 terms or a 75% reduction in the length of each annotation vector.
  • the effect of this process on the number of terms in an annotation vector depends on their weight distribution. For example, if the terms have similar weights, approximately 75% of the terms will be removed. However, for annotations with skewed weight distributions, as few as 10% of the terms might be removed. In the exemplary embodiment, this process decreased the total number of unique terms for all annotation vectors from approximately 70 million to approximately 8 million terms.
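The weighting and pruning described above can be sketched roughly as follows. The 500-term floor and 75% reduction target come from the text; the function names, the use of natural logarithm, and the toy data are assumptions.

```python
import math

# Illustrative sketch of tf-idf weighting (v'_a = tf'_a * idf'_a, with
# idf'_a = log(N / df'_a)) and low-weight-term pruning down to
# max(floor, 25% of the original vector length). Log base is an assumption.

def tfidf_vector(term_counts, doc_freqs, n_docs):
    """Build a sparse tf-idf vector from term and document frequencies."""
    return {t: tf * math.log(n_docs / doc_freqs[t]) for t, tf in term_counts.items()}

def prune_low_weight(vector, floor=500, reduction=0.75):
    """Drop the lowest-weight terms, keeping max(floor, 25% of the terms)."""
    keep = max(floor, math.ceil(len(vector) * (1.0 - reduction)))
    ranked = sorted(vector.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:keep])

counts = {"negligence duty": 5, "summary judgment": 3, "appeal court": 1}
dfs = {"negligence duty": 10, "summary judgment": 1000, "appeal court": 100000}
vec = tfidf_vector(counts, dfs, n_docs=20_000_000)
pruned = prune_low_weight(vec, floor=2)   # small floor for the toy example
print(sorted(pruned))                      # the two highest-weight terms survive
```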
  • Some other embodiments use other methods to limit vector size. For example, some embodiments apply a fixed threshold on the number of terms per category, or on the term's frequency, document frequency, or weight. These methods are generally efficient when the underlying categories do not vary significantly in the feature space. Still other embodiments perform feature selection based on measures, such as mutual information. These methods, however, are computationally expensive. The exemplary method attempts to strike a balance between these two ends.
  • Block 220, executed after representation of the annotations as text-based feature vectors, entails modeling one or more input headnotes from database 120 (in Figure 1 ) as a set of corresponding headnote-text vectors.
  • the input headnotes include headnotes that have been recently added to headnote database 120 or that have otherwise not previously been reviewed for relevance to the ALR annotations in database 110.
  • each input headnote h is modeled as a vector V h , with each element v h , like the elements of the annotation vectors, associated with a term or noun-word pair in the headnote.
  • At block 230, the exemplary method continues with operation of classification processor 130 (in Figure 1 ).
  • Figure 2 shows that block 230 itself comprises sub-process blocks 231-237.
  • Block 231, which represents operation of classifier 131, entails computing a set of similarity scores based on the similarity of the text of each input headnote to the text associated with each annotation. Specifically, the exemplary embodiment measures this similarity as the cosine of the angle between the headnote vector V h and each annotation vector V a .
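The cosine measure above can be sketched over sparse term-weight vectors. The dict-based representation and names are illustrative, not from the patent:

```python
import math

# Minimal cosine-similarity sketch: the cosine of the angle between the
# headnote vector V_h and an annotation vector V_a, represented here as
# sparse dicts mapping terms to tf-idf weights.

def cosine_similarity(v_h, v_a):
    shared = set(v_h) & set(v_a)                      # terms in both vectors
    dot = sum(v_h[t] * v_a[t] for t in shared)
    norm_h = math.sqrt(sum(w * w for w in v_h.values()))
    norm_a = math.sqrt(sum(w * w for w in v_a.values()))
    if norm_h == 0 or norm_a == 0:                    # guard empty vectors
        return 0.0
    return dot / (norm_h * norm_a)

v_h = {"negligence duty": 1.0, "proximate cause": 2.0}
v_a = {"negligence duty": 3.0, "strict liability": 4.0}
print(round(cosine_similarity(v_h, v_a), 4))  # 0.2683
```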
  • Block 232, which represents operation of classifier 132, entails determining a set of similarity scores based on the similarity of the class identifiers (or other meta-data) associated with the input headnote and those associated with each of the annotations.
  • each annotation a is represented as an annotation-class vector V C a , with each element v C a indicating the weight of a class identifier assigned to the headnotes cited by the annotation.
  • Each element v C a is defined as v C a = tf C a * idf C a , where tf C a denotes the frequency of the associated class identifier, and idf C a denotes its inverse document frequency.
  • each input headnote is also represented as a headnote-class vector V h C , with each element indicating the weight of a class or class identifier assigned to the headnote.
  • idf C h = log(N C / df C h ), where N C is the total number of classes or class identifiers and df C h is the frequency of the class or class identifier amongst the set of class or class identifiers associated with the annotation.
  • the exemplary embodiment considers each class identifier separately from the others for that headnote, ultimately using the one yielding the maximum class-identifier similarity.
  • the maximization criterion is used because, in some instances, a headnote may have two or more associated class identifiers (or Key Number classifications), indicating its discussion of two or more legal points. However, in most cases, only one of the class identifiers is relevant to a given annotation.
  • The score S 3 for headnote h and annotation a is computed as S 3 (h, a) = max over k in {k} h of P(k|a), where {k} h denotes the set of class identifiers assigned to headnote h .
  • Each annotation conditional class probability P(k|a) is estimated from a smoothed class-identifier frequency, proportional to 1 + tf(k, a), where tf(k, a) denotes the frequency of class identifier k among the headnotes cited by annotation a.
  • the exemplary determination of similarity scores S 3 relies on assumptions that class identifiers are assigned to a headnote independently of each other, and that only one class identifier in ⁇ k ⁇ h is actually relevant to annotation a. Although the one-class assumption does not hold for many annotations, it improves the overall performance of the system.
  • the inferiority may stem from the fact that annotations are created at different times, and the fact that one annotation has more citations than another does not necessarily mean it is more probable to occur for a given headnote. Indeed, a greater number of citations might only reflect that one annotation has been in existence longer and/or updated more often than another. Thus, other embodiments might use the prior probabilities based on the frequency that class numbers are assigned to the annotations.
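A rough sketch of this maximization follows. The Laplace-style smoothing (adding 1 to each class-identifier frequency) and the normalization over the annotation's own class identifiers are assumptions based on the fragmentary formula in the text; names are illustrative.

```python
# Hedged sketch of classifier 133: a headnote may carry several class
# identifiers, and only the one yielding the highest estimated P(k|a) is
# used as the score S3 for annotation a. The smoothed estimate below
# (add-one smoothing, normalized over the annotation's identifiers) is an
# assumption, not the patent's exact formula.

def s3_score(headnote_classes, class_freqs_for_annotation):
    """S3(h, a) = max over k in {k}_h of an estimated P(k|a)."""
    total = sum(1 + tf for tf in class_freqs_for_annotation.values())
    def p(k):
        return (1 + class_freqs_for_annotation.get(k, 0)) / total
    return max(p(k) for k in headnote_classes)

# Class-identifier frequencies among headnotes cited by annotation a:
freqs = {"272k1005": 8, "272k1712": 2}
print(s3_score({"272k1005", "30k203"}, freqs))  # 0.75
```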
  • classifier 134 determines a set of similarity scores S 4 based on P(a|h), the probability of each annotation given the text of the input headnote.
  • With t denoting the text of the input headnote, P(a|t) is defined according to Bayes' theorem as P(a|t) = P(t|a)P(a) / P(t).
  • Because the prior probabilities P(a) and P(a') are assumed to be equal for all annotations, the annotations can be ranked by P(t|a) alone.
  • Block 235, which represents operation of composite-score generator 135, entails computing a set of composite similarity scores CS a (h) based on the sets of similarity scores determined at blocks 231-234 by classifiers 131-134, with each composite score indicating the similarity of the input headnote h to each annotation a . Each composite score is a weighted combination, CS a (h) = sum over i of w ia * S a,i (h), where:
  • S a,i (h) denotes the similarity score of the i -th similarity score generator for the input headnote h and annotation a ; and
  • w ia is a weight assigned to the i -th similarity score generator and annotation a .
  • assignment decision-maker 136 recommends that the input headnote or a document, such as a case, associated with the headnote be classified or incorporated into one or more of the annotations based on the set of composite scores and decision criteria within decision-criteria module 137.
  • the headnote is assigned to annotations according to the following decision rule: If CS a (h) > ε a , then recommend assignment of h or D h to annotation a , where ε a is an annotation-specific threshold from decision-criteria module 137 and D h denotes a document, such as a legal opinion, associated with the headnote.
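The composite score and decision rule above amount to a weighted sum compared against a per-annotation threshold. The weight and threshold values below are illustrative assumptions:

```python
# Sketch of the composite score CS_a(h) = sum_i w_ia * S_a,i(h) and the
# decision rule "recommend if CS_a(h) > epsilon_a". Values are illustrative.

def composite_score(scores, weights):
    """Weighted sum of the per-classifier similarity scores."""
    return sum(w * s for w, s in zip(weights, scores))

def recommend(scores, weights, threshold):
    """True if the composite score exceeds the annotation-specific threshold."""
    return composite_score(scores, weights) > threshold

scores = [0.8, 0.6, 0.75, 0.5]    # S1..S4 for headnote h and annotation a
weights = [0.4, 0.2, 0.2, 0.2]    # class-specific weights w_ia (assumed)
print(round(composite_score(scores, weights), 2))   # 0.69
print(recommend(scores, weights, threshold=0.65))   # True
```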
  • each ALR annotation includes the text of associated headnotes and its full case citation.
  • the annotation thresholds ε a , a ∈ A, are also learned and reflect the homogeneity of an annotation. In general, annotations dealing with narrow topics tend to have higher thresholds than those dealing with multiple related topics.
  • the thresholds reflect the fact that over 90% of the headnotes (or associated documents) are not assigned to any annotations.
  • the exemplary embodiment estimates optimal annotation-classifier weights and annotation thresholds through exhaustive search over a five-dimensional space. The space is discretized to make the search manageable. The optimal weights are those corresponding to maximum precision at recall levels of at least 90%.
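The exhaustive search above can be sketched as a grid search over discretized weights and thresholds, keeping the setting with the best precision among those meeting the recall constraint. The grid values, sample data, and names are all assumptions:

```python
from itertools import product

# Hedged sketch of the exhaustive search: try every combination of four
# discretized classifier weights and a threshold, and keep the setting
# with the highest precision among those whose recall is at least 90%.

def precision_recall(weights, threshold, samples):
    """samples: list of (four_scores, is_relevant) pairs."""
    tp = fp = fn = 0
    for scores, relevant in samples:
        predicted = sum(w * s for w, s in zip(weights, scores)) > threshold
        if predicted and relevant:
            tp += 1
        elif predicted:
            fp += 1
        elif relevant:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def grid_search(samples, grid=(0.0, 0.25, 0.5, 0.75, 1.0), min_recall=0.9):
    best, best_precision = None, -1.0
    for w in product(grid, repeat=4):      # discretized weight space
        for t in grid:                     # discretized threshold space
            p, r = precision_recall(w, t, samples)
            if r >= min_recall and p > best_precision:
                best, best_precision = (w, t), p
    return best, best_precision

samples = [                                # toy labeled data (assumed)
    ([0.9, 0.8, 0.7, 0.9], True),
    ([0.8, 0.9, 0.8, 0.7], True),
    ([0.2, 0.3, 0.1, 0.2], False),
    ([0.4, 0.2, 0.3, 0.3], False),
]
best, precision = grid_search(samples)
print(precision)  # 1.0
```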
  • the exemplary embodiment effectively requires assignments to compete for their assigned annotations or target classifications.
  • This competition entails use of the following rule: Assign h to a iff CS a (h) > α * Ŝ, where α denotes an empirically determined value greater than zero and less than 1, for example, 0.8, and Ŝ denotes the maximum composite similarity score associated with a headnote in {H a }, the set of headnotes assigned to annotation a.
  • Block 240 entails processing classification recommendations from classification processor 130.
  • processor 130 transfers classification recommendations to preliminary classification database 140 (shown in Figure 1 ).
  • Database 140 sorts the recommendations based on annotation, jurisdiction, or other relevant criteria and stores them in, for example, a single first-in-first-out (FIFO) queue, or in multiple FIFO queues based on single annotations or subsets of annotations.
  • One or more of the recommendations are then communicated by request or automatically to workcenter 150, specifically workstations 152, 154, and 156.
  • Each of workstations 152, 154, and 156 displays, automatically or in response to user activation, one or more graphical-user interfaces, such as graphical-user interface 152.1.
  • FIG. 4 shows an exemplary form of graphical-user interface 152.1.
  • Interface 152.1 includes concurrently displayed windows or regions 410, 420, 430 and buttons 440-490.
  • Window 410 displays a recommendation list 412 of headnote identifiers from preliminary classification database 140. Each headnote identifier is logically associated with at least one annotation identifier (shown in window 430). Each of the listed headnote identifiers is selectable using a selection device, such as a keyboard, mouse, or microphone. A headnote identifier 412.1 in list 412 is automatically highlighted upon selection, for example, by reverse-video presentation.
  • window 420 displays a headnote 422 and a case citation 424, both of which are associated with each other and the highlighted headnote identifier 412.1.
  • window 430 displays at least a portion or section of an annotation outline 432 (or classification hierarchy), associated with the annotation designated by the annotation identifier associated with headnote 412.1.
  • Button 440, labeled "New Section," allows a user to create a new section or subsection in the annotation outline. This feature is useful since, in some instances, a headnote suggestion is good but does not fit an existing section of the annotation. Creating the new section or subsection thus allows for convenient expansion of the annotation.
  • Button 450 toggles on and off the display of a text box describing headnote assignments made to the current annotation during the current session.
  • the text box presents each assignment in a short textual form, such as ⁇ annotation or class identifier> ⁇ subsection or section identifier > ⁇ headnote identifier>. This feature is particularly convenient for larger annotation outlines that exceed the size of window 430 and require scrolling contents of the window.
  • Button 460, labeled "Un-Allocate," allows a user to de-assign, or declassify, a headnote from a particular annotation. Thus, if a user changes her mind regarding a previous, unsaved classification, the user can nullify the classification.
  • headnotes identified in window 410 are understood to be assigned to the particular annotation section displayed in window 430 unless the user decides that the assignment is incorrect or inappropriate.
  • acceptance of a recommendation entails automatic creation of hyperlinks linking the annotation to the case and the case to the annotation.
  • Button 470, labeled "Next Annotation," allows a user to cause display of the set of headnotes recommended for assignment to the next annotation. Specifically, this entails not only retrieving headnotes from preliminary classification database 140 and displaying them in window 410, but also displaying the relevant annotation outline within window 430.
  • Button 480, labeled "Skip Anno," allows a user to skip the current annotation and its suggestions altogether and advance to the next set of suggestions and associated annotation. This feature is particularly useful when an editor wants another editor to review assignments to a particular annotation, or when the editor wants to review the annotation at another time, for example, after reading or studying the entire annotation text.
  • the suggestions remain in preliminary classification database 140 until they are either reviewed or removed. (In some embodiments, the suggestions are time-stamped and may be supplanted with more current suggestions or deleted automatically after a preset period of time, with the time period, in some variations dependent on the particular annotation.)
  • Button 490, labeled "Exit," allows an editor to terminate an editorial session. Upon termination, acceptances and recommendations are stored in ALR annotations database 110.
  • Block 250 entails updating of classification decision criteria.
  • this entails counting the numbers of accepted and rejected classification recommendations for each annotation, and adjusting the annotation-specific decision thresholds and/or classifier weights appropriately. For example, if 80% of the classification recommendations for a given annotation are rejected during one day, week, month, quarter or year, the exemplary embodiment may increase the decision threshold associated with that annotation to reduce the number of recommendations. Conversely, if 80% are accepted, the threshold may be lowered to ensure that a sufficient number of recommendations are being considered.
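The feedback step above can be sketched as follows. The 80% trigger comes from the text; the 5% step size and multiplicative adjustment are assumptions:

```python
# Illustrative sketch of the decision-criteria update: raise an annotation's
# threshold when most of its recommendations are rejected, lower it when
# most are accepted. The 5% step size is an assumption.

def update_threshold(threshold, accepted, rejected, step=0.05):
    total = accepted + rejected
    if total == 0:
        return threshold
    if rejected / total >= 0.8:
        return threshold * (1 + step)   # too many bad suggestions: be stricter
    if accepted / total >= 0.8:
        return threshold * (1 - step)   # nearly all accepted: allow more through
    return threshold                    # acceptance rate unremarkable: no change

print(update_threshold(0.60, accepted=2, rejected=18))   # raised above 0.60
print(update_threshold(0.60, accepted=18, rejected=2))   # lowered below 0.60
```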
  • Figure 5 shows a variation of system 100 in the form of an exemplary classification system 500 tailored to facilitate classification of documents to one or more of the 135,000 sections of American Jurisprudence (AmJur). Similar to an ALR annotation, each AmJur section cites relevant cases as they are decided by the courts. Likewise, updating AmJur is time consuming.
  • classification system 500 includes six classifiers: classifiers 131-134 and classifiers 510 and 520, a composite score generator 530, and assignment decision-maker 540.
  • Classifiers 131-134 are identical to the ones used in system 100, with the exception that they operate on AmJur data as opposed to ALR data.
  • Classifiers 510 and 520 process AmJur section text itself, instead of proxy text based on headnotes cited within the AmJur section. More specifically, classifier 510 operates using the formulae underlying classifier 131 to generate similarity measurements based on the tf-idf (term frequency-inverse document frequency) products of noun-word pairs in AmJur section text. And classifier 520 operates using the formulae underlying classifier 134 to generate similarity measurements based on the probabilities of a section text given the input headnote.
  • each classifier assigns each AmJur section a similarity score based on a numerical ranking of its respective set of similarity measurements.
  • each of the six classifiers effectively ranks the 135,000 AmJur sections according to their similarities to the headnote.
  • Table 1 shows a partial ranked listing of AmJur sections showing how each classifier scored, or ranked, their similarity to a given headnote.
  • Table 1. Partial ranked listing of AmJur sections based on median of six similarity scores.

        Section      C1   C2   C3   C4   C5   C6   Median
        Section_1     1    8    4    1    3    2    2.5
        Section_2     3    2    5    9    1    3    3
        Section_3     2    4    6    5    4    4    4
        Section_4     5    1    3    8    6    1    4
        Section_5     7    3    2    2    5    5    4
        Section_6     4    5    1    7    2    9    4.5
        Section_7     8    7    8    4    7    6    7
        Section_8     6    9    7    3   10    7    7
        Section_9     9   10    9    6    9   10    9
        Section_10   10    6   10   10    8    8    9
  • Composite score generator 530 generates a composite similarity score for each AmJur section based on its corresponding set of six similarity scores. In the exemplary embodiment, this entails computing the median of the six scores for each AmJur section. However, other embodiments can compute a uniform or non-uniformly weighted average of all six or a subset of the six ranking. Still other embodiments can select the maximum, minimum, or mode as the composite score for the AmJur section. After generating the composite scores, the composite score generator forwards data identifying the AmJur section associated with the highest composite score, the highest composite score, and the input headnote to assignment decision-maker 540.
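The median rank fusion above can be sketched directly; the sample data mirrors the first rows of Table 1 (smaller rank is better):

```python
import statistics

# Sketch of composite score generator 530: each of the six classifiers
# ranks every AmJur section, and the composite score is the median of the
# six ranks. Data taken from rows of Table 1.

def median_rank(ranks):
    return statistics.median(ranks)

sections = {
    "Section_1": [1, 8, 4, 1, 3, 2],
    "Section_2": [3, 2, 5, 9, 1, 3],
    "Section_6": [4, 5, 1, 7, 2, 9],
}
for name, ranks in sections.items():
    print(name, median_rank(ranks))   # 2.5, 3, and 4.5, matching Table 1
```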
  • Assignment decision-maker 540 provides a fixed portion of headnote-classification recommendations to preliminary classification database 140, based on the total number of input headnotes received per fixed time period.
  • the fixed number and time period governing the number of recommendations are determined according to parameters within decision-criteria module 137. For example, one embodiment ranks all incoming headnotes for the time period, based on their composite scores and recommends only those headnotes that rank in the top 16 percent.
  • more than one headnote may have a composite score that equals a given cut-off threshold, such as top 16%.
  • the exemplary embodiment re-orders all headnote-section pairs that coincide with the cut-off threshold, using the six actual classifier scores.
  • Z-scores are obtained by assuming that each classifier score has a normal distribution, estimating the mean and standard deviation of the distribution, and then subtracting the mean from the classifier score and dividing the result by the standard deviation.
  • the headnote-section pairs that meet the acceptance criteria are then re-ordered, or re-ranked, according to this new similarity measure, with as many as needed to achieve the desired number of total recommendations being forwarded to preliminary classification database 140. (Other embodiments may apply this "reordering" to all of the headnote-section pairs and then filter these based on the acceptance criteria necessary to obtain the desired number of recommendations.)
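The z-score re-ordering above can be sketched as follows. Combining the six normalized scores by summation is an assumption, as are the means, standard deviations, and sample data:

```python
# Sketch of the z-score tie-breaking step: normalize each classifier score
# by subtracting an estimated mean and dividing by an estimated standard
# deviation, then re-rank tied headnote-section pairs. Summing the six
# z-scores into one measure is an assumption.

def z_score(score, mean, stdev):
    return (score - mean) / stdev

def rerank(pairs, means, stdevs):
    """pairs: {pair_id: [six raw classifier scores]}; best z-sum first."""
    def z_sum(scores):
        return sum(z_score(s, m, sd) for s, m, sd in zip(scores, means, stdevs))
    return sorted(pairs, key=lambda p: z_sum(pairs[p]), reverse=True)

means = [0.5] * 6     # estimated per-classifier score means (assumed)
stdevs = [0.1] * 6    # estimated per-classifier standard deviations (assumed)
tied = {
    "h7-s12": [0.62, 0.55, 0.58, 0.50, 0.61, 0.57],
    "h9-s40": [0.52, 0.49, 0.51, 0.50, 0.48, 0.53],
}
print(rerank(tied, means, stdevs))  # ['h7-s12', 'h9-s40']
```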
  • Figure 6 shows another variation of system 100 in the form of an exemplary classification system 600 tailored to facilitate classification of input headnotes to classes of the West Key Number System.
  • the Key Number System is a hierarchical classification system with 450 top-level classes, which are further subdivided into 92,000 sub-classes, each having a unique class identifier.
  • system 600 includes classifiers 131 and 134, a composite score generator 610, and an assignment decision-maker 620.
  • classifiers 131 and 134 model each input headnote as a feature vector of noun-word pairs and each class identifier as a feature vector of noun-word pairs extracted from headnotes assigned to it.
  • Classifier 131 generates similarity scores based on the tf-idf products for noun-word pairs in headnotes assigned to each class identifier and to a given input headnote.
  • classifier 134 generates similarity scores based on the probabilities of a class identifier given the input headnote.
  • system 600 generates over 184,000 similarity scores, with each score representing the similarity of the input headnote to a respective one of the over 92,000 class identifiers in the West Key Number System using a respective one of the two classifiers.
  • Composite score generator 610 combines the two similarity measures for each possible headnote-class-identifier pair to generate a respective composite similarity score.
  • this entails defining, for each class or class identifier, two normalized cumulative histograms (one for each classifier) based on the headnotes already assigned to the class. These histograms approximate corresponding cumulative density functions, allowing one to determine the probability that a given percentage of the class identifiers scored below a certain similarity score.
  • M c denotes the set of headnotes already classified or associated with class or class identifier c ;
  • S i 1 denotes the similarity score for headnote h i and class-identifier c , as measured by classifier 131; and
  • S i 2 denotes the similarity score for headnote h i and class-identifier c , as measured by classifier 134.
  • each similarity score indicates the similarity of a given assigned headnote to all the headnotes assigned to class c .
  • each histogram provides the percentage of assigned headnotes that scored higher and lower than a particular score. For example, for classifier 131, the histogram for class identifier c might show that 60% of the set of headnotes assigned to class identifier c scored higher than 0.7 when compared to the set of headnotes as a whole, whereas for classifier 134 the histogram might show that 50% of the assigned headnotes scored higher than 0.7.
  • composite score generator 610 converts each score for the input headnote into a normalized similarity score using the corresponding histogram and computes each composite score for each class based on the normalized scores.
  • this conversion entails mapping each classifier score to the corresponding histogram to determine its cumulative probability and then multiplying the cumulative probabilities of respective pairs of scores associated with a given class c to compute the respective composite similarity score.
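A minimal sketch of this conversion, under the assumption that the normalized cumulative histogram is simply the empirical CDF of the scores of headnotes already assigned to the class; function and variable names are illustrative, not from the patent.

```python
# Illustrative sketch: map each classifier score through a normalized
# cumulative histogram (approximating a CDF) built from the scores of
# headnotes already assigned to the class, then multiply the two cumulative
# probabilities to form the composite similarity score.
from bisect import bisect_right

def cumulative_probability(training_scores, score):
    """Fraction of training scores at or below `score` (empirical CDF)."""
    ordered = sorted(training_scores)
    return bisect_right(ordered, score) / len(ordered)

def composite_score(scores_131, scores_134, s1, s2):
    """Multiply the normalized (cumulative) probabilities of the two scores."""
    return (cumulative_probability(scores_131, s1) *
            cumulative_probability(scores_134, s2))
```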
  • the set of composite scores for the input headnote are then processed by assignment decision-maker 620.
  • Assignment decision maker 620 forwards a fixed number of the top scoring class identifiers to preliminary classification database 140.
  • the exemplary embodiment suggests the class identifiers having the top five composite similarity scores for every input headnote.
  • the components of the various exemplary systems presented can be combined in myriad ways to form other classification systems of both greater and lesser complexity. Additionally, the components and systems can be tailored for other types of documents other than headnotes. Indeed, the components and systems and embodied teachings and principles of operation are relevant to virtually any text or data classification context.
  • Some mail-classifying systems may include one or more classifiers in combination with conventional rules that classify messages as useful or spam based on, for example, whether the sender appears in the recipient's address book or shares the recipient's domain.
  • the inventors have presented various exemplary systems, methods, and software which facilitate the classification of text, such as headnotes or associated legal cases, to a classification system, such as that represented by the nearly 14,000 ALR annotations.
  • the exemplary system classifies or makes classification recommendations based on text and class similarities and probabilistic relations.
  • the system also provides a graphical user interface to facilitate editorial processing of recommended classifications and thus automated updating of document collections, such as the American Law Reports, American Jurisprudence, and countless others.

Abstract

To reduce cost and improve accuracy, the inventors devised systems, methods, and software to aid classification of text, such as headnotes and other documents, to target classes in a target classification system. For example, one system computes composite scores based on: similarity of input text to text assigned to each of the target classes; similarity of non-target classes assigned to the input text and target classes; probability of a target class given a set of one or more non-target classes assigned to the input text; and/or probability of the input text given text assigned to the target classes. The exemplary system then evaluates the composite scores using class-specific decision criteria, such as thresholds, ultimately assigning or recommending assignment of the input text to one or more of the target classes. The exemplary system is particularly suitable for classification systems having thousands of classes.

Description

    Technical Field
  • The present invention concerns systems, methods, and software for classifying text and documents, such as headnotes of judicial opinions.
  • Background
  • The American legal system, as well as some other legal systems around the world, relies heavily on written judicial opinions --the written pronouncements of judges-- to articulate or interpret the laws governing resolution of disputes. Each judicial opinion is not only important to resolving a particular legal dispute, but also to resolving similar disputes in the future. Because of this, judges and lawyers within our legal system are continually researching an ever-expanding body of past opinions, or case law, for the ones most relevant to resolution of new disputes.
  • To facilitate these searches, companies, such as West Publishing Company of St. Paul, Minnesota (doing business as West Group), not only collect and publish the judicial opinions of courts across the United States, but also summarize and classify the opinions based on the principles or points of law they contain. West Group, for example, creates and classifies headnotes --short summaries of points made in judicial opinions-- using its proprietary West Key Number™ System. (West Key Number is a trademark of West Group.)
  • The West Key Number System is a hierarchical classification of over 20 million headnotes across more than 90,000 distinctive legal categories, or classes. Each class has not only a descriptive name, but also a unique alpha-numeric code, known as its Key Number classification.
  • In addition to highly-detailed classification systems, such as the West Key Number System, judges and lawyers conduct research using products, such as American Law Reports (ALR), that provide in-depth scholarly analysis of a broad spectrum of legal issues. In fact, the ALR includes about 14,000 distinct articles, known as annotations, each teaching about a separate legal issue, such as double jeopardy and free speech. Each annotation also includes citations and/or headnotes identifying relevant judicial opinions to facilitate further legal research.
  • To ensure their currency as legal-research tools, the ALR annotations are continually updated to cite recent judicial opinions (or cases). However, updating is a costly task given that courts across the country collectively issue hundreds of new opinions every day and that the conventional technique for identifying which of these cases are good candidates for citation is inefficient and inaccurate.
  • In particular, the conventional technique entails selecting cases that have headnotes in certain classes of the West Key Number System as candidates for citations in corresponding annotations. The candidate cases are then sent to professional editors for manual review and final determination of which should be cited to the corresponding annotations. Unfortunately, this simplistic mapping of classes to annotations not only sends many irrelevant cases to the editors, but also fails to send many that are relevant, both increasing the workload of the editors and limiting accuracy of the updated annotations.
  • Accordingly, there is a need for tools that facilitate classification or assignment of judicial opinions to ALR annotations and other legal research tools.
  • Larkey L.S. et al.: 'Combining classifiers in text categorization' 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Zurich, Switzerland, 18-22 Aug. 1996, vol. spec. issue., pages 289-297, XP002231517 SIGIR Forum, 1996, ACT, USA ISSN: 0163-5840, describes a prior art text categorisation system. The preambles to the independent claims are based on this document.
  • Summary of Exemplary Embodiments
  • To address this and other needs, the present inventors devised systems, methods, and software that facilitate classification of text or documents according to a target classification system. For instance, one exemplary system aids in classifying headnotes to the ALR annotations; another aids in classifying headnotes to sections of American Jurisprudence (another encyclopedic-style legal reference); and yet another aids in classifying headnotes to the West Key Number System. However, these and other embodiments are applicable to classification of other types of documents, such as emails.
  • According to a first aspect of the present invention, there is provided a computerized system for classifying input text to a target classification system having two or more target classes, the system comprising:
    • means for determining for each of the target classes at least first and second scores based on the input text and the target class using respective first and second classification methods, and being characterised by comprising:
    • means for determining for each of the target classes a corresponding composite score based on the first score scaled by a first class-specific weight for the target class and the second score scaled by a second class-specific weight for the target class; and
    • means for determining for each of the target classes whether to classify or recommend classification of the input text to the target class based on the corresponding composite score and a class-specific decision threshold for the target class.
  • According to a second aspect of the present invention, there is provided a computer-implemented method of classifying input text to a target classification system having two or more target classes, the method comprising: for each target class:
    • determining first and second scores based on the input text and the target class using respective first and second classification methods, and being characterised by:
      for each target class:
      • determining a composite score based on the first score scaled by a first class-specific weight for the target class and the second score scaled by a second class-specific weight for the target class; and
      • determining whether to identify the input text for classification to the target class based on the composite score and a class-specific decision threshold for the target class.
  • More particularly, some of the exemplary systems classify or aid manual classification of an input text by determining a set of composite scores, with each composite score corresponding to a respective target class in the target classification system. Determining each composite score preferably entails computing and applying class-specific weights to at least two of the following types of scores:
    • a first type based on similarity of the input text to text associated with a respective one of the target classes;
    • a second type based on similarity of a set of non-target classes associated with the input text and a set of non-target classes associated with a respective one of the target classes;
    • a third type based on probability of one of the target classes given a set of one or more non-target classes associated with the input text; and
    • a fourth type based on a probability of the input text given text associated with a respective one of the target classes.
  • These exemplary systems then evaluate the composite scores using class-specific decision criteria, such as thresholds, to ultimately assign or recommend assignment of the input text (or a document or other data structure associated with the input text) to one or more of the target classes.
  • Brief Description of Drawings
  • Figure 1
    is a diagram of an exemplary classification system 100 embodying teachings of the invention, including a unique graphical user interface 114;
    Figure 2
    is a flowchart illustrating an exemplary method embodied in classification system 100 of Figure 1;
    Figure 3
    is a diagram of an exemplary headnote 310 and a corresponding noun-word-pair model 320;
    Figure 4
    is a facsimile of an exemplary graphical user interface 400 that forms a portion of classification system 100;
    Figure 5
    is a diagram of another exemplary classification system 500, which is similar to system 100 but includes additional classifiers; and
    Figure 6
    is a diagram of another exemplary classification system 600, which is similar to system 100 but omits some classifiers.
    Detailed Description of Exemplary Embodiments
  • This description, which references and incorporates the above-identified Figures, describes one or more specific embodiments of one or more inventions. These embodiments, offered not to limit but only to exemplify and teach the one or more inventions, are shown and described in sufficient detail to enable those skilled in the art to implement or practice the invention. Thus, where appropriate to avoid obscuring the invention, the description may omit certain information known to those of skill in the art.
  • The description includes many terms with meanings derived from their usage in the art or from their use within the context of the description. However, as a further aid, the following exemplary definitions are presented.
  • The term "document" refers to any addressable collection or arrangement of machine-readable data.
  • The term "database" includes any logical collection or arrangement of documents.
  • The term "headnote" refers to an electronic textual summary or abstract concerning a point of law within a written judicial opinion. The number of headnotes associated with a judicial opinion (or case) depends on the number of issues it addresses.
  • Exemplary System for Classifying Headnotes to American Law Reports
  • Figure 1 shows a diagram of an exemplary document classification system 100 for automatically classifying or recommending classifications of electronic documents according to a document classification scheme. The exemplary embodiment classifies or recommends classification of cases, case citations, or associated headnotes, to one or more of the categories represented by 13,779 ALR annotations. (The total number of annotations is growing at a rate on the order of 20-30 annotations per month.) However, the present invention is not limited to any particular type of document or type of classification system.
  • Though the exemplary embodiment is presented as an interconnected ensemble of separate components, some other embodiments implement their functionality using a greater or lesser number of components. Moreover, some embodiments intercouple one or more of the components through a local- or wide-area network. (Some embodiments implement one or more portions of system 100 using one or more mainframe computers or servers.) Thus, the present invention is not limited to any particular functional partition.
  • System 100 includes an ALR annotation database 110, a headnotes database 120, a classification processor 130, a preliminary classification database 140, and editorial workstations 150.
  • ALR annotation database 110 (more generally a database of electronic documents classified according to a target classification scheme) includes a set of 13,779 annotations, which are represented generally by annotation 112. The exemplary embodiment regards each annotation as a class or category. Each annotation, such as annotation 112, includes a set of one or more case citations, such as citations 112.1 and 112.2.
  • Each citation identifies or is associated with at least one judicial opinion (or generally an electronic document), such as electronic judicial opinion (or case) 115. Judicial opinion 115 includes and/or is associated with one or more headnotes in headnote database 120, such as headnotes 122 and 124. (In the exemplary embodiment, a typical judicial opinion or case has about 6 associated headnotes, although cases having 50 or more are not rare.) A sample headnote and its assigned West Key Number class identifier are shown below.
  • Exemplary Headnote:
  • In an action brought under Administrative Procedure Act (APA), inquiry is twofold: court first examines the organic statute to determine whether Congress intended that an aggrieved party follow a particular administrative route before judicial relief would become available; if that generative statute is silent, court then asks whether an agency's regulations require recourse to a superior agency authority.
  • Exemplary Key Number class identifier:
  • 15AK229 - ADMINISTRATIVE LAW AND PROCEDURE - SEPARATION OF ADMINISTRATIVE AND OTHER POWERS - JUDICIAL POWERS
  • In database 120, each headnote is associated with one or more class identifiers, which are based, for example, on the West Key Number Classification System. (For further details on the West Key Number System, see West's Analysis of American Law: Guide to the American Digest System, 2000 Edition, West Group, 1999, which is incorporated herein by reference.) For example, headnote 122 is associated with classes or class identifiers 122.1, 122.2, and 122.3, and headnote 124 is associated with classes or class identifiers 124.1 and 124.2.
  • In the exemplary system, headnote database 120 includes about 20 million headnotes and grows at an approximate rate of 12,000 headnotes per week. About 89% of the headnotes are associated with a single class identifier, about 10% with two class identifiers, and about 1% with more than two class identifiers.
  • Additionally, headnote database 120 includes a number of headnotes, such as headnotes 126 and 128, that are not yet assigned or associated with an ALR annotation in database 110. The headnotes, however, are associated with class identifiers. Specifically, headnote 126 is associated with class identifiers 126.1 and 126.2, and headnote 128 is associated with class identifier 128.1.
  • Coupled to both ALR annotation database 110 and headnote database 120 is classification processor 130. Classification processor 130 includes classifiers 131, 132, 133, and 134, a composite-score generator 135, an assignment decision-maker 136, and decision-criteria module 137. Processor 130 determines whether one or more cases associated with headnotes in headnote database 120 should be assigned to or cited within one or more of the annotations of annotation database 110. Processor 130 is also coupled to preliminary classification database 140.
  • Preliminary classification database 140 stores and/or organizes the assignment or citation recommendations. Within database 140, the recommendations can be organized as a single first-in-first-out (FIFO) queue, or as multiple FIFO queues based on single annotations or subsets of annotations. The recommendations are ultimately distributed to work center 150.
  • Work center 150 communicates with preliminary classification database 140 as well as annotation database 110 and ultimately assists users in manually updating the ALR annotations in database 110 based on the recommendations stored in database 140. Specifically, work center 150 includes workstations 152, 154, and 156. Workstation 152, which is substantially identical to workstations 154 and 156, includes a graphical-user interface 152.1, and user-interface devices, such as a keyboard and mouse (not shown.)
  • In general, exemplary system 100 operates as follows. Headnotes database 120 receives a new set of headnotes (such as headnotes 126 and 128) for recently decided cases, and classification processor 130 determines whether one or more of the cases associated with the headnotes are sufficiently relevant to any of the annotations within ALR to justify recommending assignments of the headnotes (or associated cases) to one or more of the annotations. (Some other embodiments directly assign the headnotes or associated cases to the annotations.) The assignment recommendations are stored in preliminary classification database 140 and later retrieved by or presented to editors in work center 150 via graphical-user interfaces in workstations 152, 154, and 156 for acceptance or rejection. Accepted recommendations are added as citations to the respective annotations in ALR annotation database 110 and rejected recommendations are not. However, both accepted and rejected recommendations are fed back to classification processor 130 for incremental training or tuning of its decision criteria.
  • More particularly, Figure 2 shows a flow chart 200 illustrating in greater detail an exemplary method of operating system 100. Flow chart 200 includes a number of process blocks 210-250. Though arranged serially in the exemplary embodiment, other embodiments may reorder the blocks, omit one or more blocks, and/or execute two or more blocks in parallel using multiple processors or a single processor organized as two or more virtual machines or subprocessors. Moreover, still other embodiments implement the blocks as one or more specific interconnected hardware or integrated-circuit modules with related control and data signals communicated between and through the modules. Thus, the exemplary process flow is applicable to software, firmware, hardware, and hybrid implementations.
  • The remainder of the description uses the following notational system. The lower case letters a, h, and k respectively denote an annotation, a headnote, and a class or class identifier, such as a West Key Number class or class identifier. The upper case letters A, H, and K respectively denote the set of all annotations, the set of all headnotes, and the set of all key numbers classifications. Additionally, variables denoting vector quantities are in bold-faced capital letters, and elements of the corresponding vectors are denoted in lower case letters. For example, V denotes a vector, and v denotes an element of vector V.
  • At block 210, the exemplary method begins by representing the annotations in annotations database 110 (in Figure 1) as text-based feature vectors. In particular, this entails representing each annotation a as a one-column feature vector, V a , based on the noun and/or noun-word pairs occurring in headnotes for the cases cited within the annotation. (Other embodiments represent the headnotes as bigrams or noun phrases.)
  • Although it is possible to use all the headnotes associated with the cases cited in the annotation, the exemplary embodiment selects from the set of all headnotes associated with the cited cases those that are most relevant to the annotation being represented. For each annotation, this entails building a feature vector using all the headnotes in all cases cited in the annotation and selecting from each case one, two, or three headnotes based on similarity between the headnotes in a cited case and those of the citing annotation and denoting the most similar headnote(s) as relevant. To determine the most relevant headnotes, the exemplary embodiment uses classifiers 131-134 to compute similarity scores, averages the four scores for each headnote, and defines as most relevant the highest scoring headnote plus those with a score of at least 80% of the highest score. The 80% value was chosen empirically.
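The selection rule above -- average the four classifier scores per headnote, then keep the top scorer plus any headnote scoring at least 80% of that maximum -- might be sketched as follows; function and variable names are illustrative.

```python
# Sketch of the relevant-headnote selection rule: average the four classifier
# scores per headnote, keep the highest-scoring headnote plus those with an
# average of at least 80% of the highest average (an empirically chosen ratio).
def select_relevant(headnote_scores, ratio=0.8):
    """headnote_scores: {headnote_id: [s1, s2, s3, s4]} -> list of relevant ids."""
    averages = {h: sum(s) / len(s) for h, s in headnote_scores.items()}
    top = max(averages.values())
    return [h for h, avg in averages.items() if avg >= ratio * top]
```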
  • Once selected, the associated headnotes (or alternatively the actual text of the annotations) are represented as a set of nouns, noun-noun, noun-verb, and noun-adjective pairs that it contains. Words in a word-pair are not necessarily adjacent, but are within a specific number of words or characters of each other, that is, within a particular word or character window. The window size is adjustable and can take values from 1 to the total number of words or characters in the headnote. Although larger windows tend to yield better performance, in the exemplary embodiment, no change in performance was observed for windows larger than 32 non-stop words. For convenience, however, the exemplary window size is set to the actual headnote size. The exemplary embodiment excludes stop words and uses the root form of all words. Appendix A shows an exemplary list of exemplary stopwords; however, other embodiments use other lists of stopwords.
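The noun-word-pair extraction within a word window might be sketched as follows. Part-of-speech tagging, stemming, and stop-word removal are assumed to happen upstream (tokens arrive pre-tagged), and all names are illustrative rather than from the patent.

```python
# Illustrative sketch: pair each noun with every noun, verb, or adjective
# falling within a fixed window of non-stop words. Paired words need not be
# adjacent, only within the window, mirroring the description above.
def noun_word_pairs(tagged_tokens, window=32, stopwords=frozenset()):
    """tagged_tokens: [(word, pos)] with pos in {'N', 'V', 'ADJ', ...}."""
    tokens = [(w, p) for w, p in tagged_tokens if w not in stopwords]
    pairs = set()
    for i, (w1, p1) in enumerate(tokens):
        if p1 != 'N':
            continue  # only nouns anchor a pair
        for w2, p2 in tokens[max(0, i - window):i + window + 1]:
            if w2 != w1 and p2 in ('N', 'V', 'ADJ'):
                pairs.add((w1, w2))
    return pairs
```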
  • Figure 3 shows an example of a headnote 310 and a noun-word representation 320 in accord with the exemplary embodiment. Also shown are West Key Number classification text 330 and class identifier 340.
  • In a particular annotation vector V a , the weight, or magnitude, of any particular element v a is defined as

      v a = tf′ a * idf′ a ,

    where tf′ a denotes the term frequency (that is, the total number of occurrences) of the term or noun-word pair associated with annotation a. (In the exemplary embodiment, this is the number of occurrences of the term within the set of headnotes associated with the annotation.) idf′ a denotes the inverse document frequency for the associated term or noun-word pair and is defined as

      idf′ a = log ( N / df′ a ),

    where N is the total number of headnotes (for example, 20 million) in the collection, and df′ a is the number of headnotes (or more generally documents) containing the term or noun-word pair. The prime (′) notation indicates that these frequency parameters are based on proxy text, for example, the text of associated headnotes, as opposed to text of the annotation itself. (However, other embodiments may use all or portions of text from the annotation alone or in combination with proxy text, such as headnotes or other related documents.)
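The weight formula above reduces to a one-line computation. This illustrative sketch assumes natural logarithms, which the text does not specify, and the function name is invented.

```python
# Sketch of the tf-idf weight: term frequency times log(N / document frequency),
# where N is the collection size and document frequency is the number of
# headnotes (documents) containing the term or noun-word pair.
import math

def tfidf_weight(term_frequency, doc_frequency, n_docs):
    """Return tf * idf with idf = log(n_docs / doc_frequency)."""
    return term_frequency * math.log(n_docs / doc_frequency)
```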
  • Even though the exemplary embodiment uses headnotes associated with an annotation as opposed to text of the annotation itself, the annotation-text vectors can include a large number of elements. Indeed, some annotation vectors can include hundreds of thousands of terms or noun-word pairs, with the majority of them having a low term frequency. Thus, not only to reduce the number of terms to a manageable number, but also to avoid the rare-word problem known to exist in vector-space models, the exemplary embodiment removes low-weight terms.
  • Specifically, the exemplary embodiment removes as many low-weight terms as necessary to achieve a lower absolute bound of 500 terms or a 75% reduction in the length of each annotation vector. The effect of this process on the number of terms in an annotation vector depends on their weight distribution. For example, if the terms have similar weights, approximately 75% of the terms will be removed. However, for annotations with skewed weight distributions, as few as 10% of the terms might be removed. In the exemplary embodiment, this process decreased the total number of unique terms for all annotation vectors from approximately 70 million to approximately 8 million terms.
  • Some other embodiments use other methods to limit vector size. For example, some embodiments apply a fixed threshold on the number of terms per category, or on a term's frequency, document frequency, or weight. These methods are generally efficient when the underlying categories do not vary significantly in the feature space. Still other embodiments perform feature selection based on measures such as mutual information. These methods, however, are computationally expensive. The exemplary method attempts to strike a balance between these two extremes.
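The pruning rule described above -- remove low-weight terms down to a 75% reduction or a floor of 500 terms, whichever keeps more -- might be sketched as follows; the exact tie handling and names are assumptions.

```python
# Sketch of low-weight term removal: keep the heaviest terms, retaining at
# least 500 terms (the lower absolute bound) or 25% of the original vector
# (a 75% reduction), whichever is larger.
def prune_vector(weights, floor=500, reduction=0.75):
    """weights: {term: weight} -> pruned dict keeping the heaviest terms."""
    keep = max(floor, int(len(weights) * (1 - reduction)))
    heaviest = sorted(weights, key=weights.get, reverse=True)[:keep]
    return {t: weights[t] for t in heaviest}
```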
  • Block 220, executed after representation of the annotations as text-based feature vectors, entails modeling one or more input headnotes from database 120 (in Figure 1) as a set of corresponding headnote-text vectors. The input headnotes include headnotes that have been recently added to headnote database 120 or that have otherwise not previously been reviewed for relevance to the ALR annotations in database 110.
  • The exemplary embodiment represents each input headnote h as a vector V h , with each element vh , like the elements of the annotation vectors, associated with a term or noun-word pair in the headnote. vh is defined as v h = t f h * id f H ,
    Figure imgb0007

    where tfh, denotes the frequency (that is, the total number of occurrences) of the associated term or noun-word pair in the input headnote, and idfH denotes the inverse document frequency of the associated term or noun-word pair within all the headnotes.
  • At block 230, the exemplary method continues with operation of classification processor 130 (in Figure 1). Figure 2 shows that block 230 itself comprises sub-process blocks 231-237.
  • Block 231, which represents operation of classifier 131, entails computing a set of similarity scores based on the similarity of the text in each input headnote to the text associated with each annotation. Specifically, the exemplary embodiment measures this similarity as the cosine of the angle between the headnote vector V h and each annotation vector V a . Mathematically, this is expressed as

      S1 = cos ( θ ah ) = ( V a ′ · V h ′ ) / ( ∥ V a ∥ × ∥ V h ∥ ),

    where "·" denotes the conventional dot- or inner-product operator, and V a ′ and V h ′ denote that the respective vectors V a and V h have been modified to include only elements corresponding to terms or noun-word pairs found in both the annotation text and the headnote. In other words, the dot product is computed based on the intersection of the terms or noun-word pairs. ∥X∥ denotes the length of the vector argument; in this embodiment, the magnitudes are computed based on all the elements of the vector.
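The similarity S1 -- dot product over the intersection of terms, but magnitudes over each full vector -- might be sketched as follows, with vectors as dicts mapping terms to weights; names are illustrative.

```python
# Sketch of S1: the dot product runs only over terms common to both vectors,
# while each norm is computed over all of that vector's elements.
import math

def cosine_similarity(v_a, v_h):
    """v_a, v_h: {term: weight} -> cosine similarity per the S1 formula."""
    dot = sum(v_a[t] * v_h[t] for t in v_a.keys() & v_h.keys())
    norm_a = math.sqrt(sum(w * w for w in v_a.values()))
    norm_h = math.sqrt(sum(w * w for w in v_h.values()))
    return dot / (norm_a * norm_h) if norm_a and norm_h else 0.0
```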
  • Block 232, which represents operation of classifier 132, entails determining a set of similarity scores based on the similarity of the class identifiers (or other meta-data) associated with the input headnote and those associated with each of the annotations. Before this determination is made, each annotation a is represented as an annotation-class vector V a C , with each element v a C indicating the weight of a class identifier assigned to the headnotes cited by the annotation. Each element v a C is defined as

      v a C = tf a C * idf a C ,

    where tf a C denotes the frequency of the associated class identifier, and idf a C denotes its inverse document frequency. idf a C is defined as

      idf a C = log ( N C / df C ),

    where N C is the total number of classes or class identifiers. In the exemplary embodiment, N C is 91,997, the total number of classes in the West Key Number System. df C is the frequency of the class identifier amongst the set of class identifiers for annotation a. Unlike the exemplary annotation-text vectors, which are based on a selected set of annotation headnotes, the annotation-class vectors use all the class identifiers associated with all the headnotes that are associated with the annotation. Some embodiments may use class-identifier pairs, although these were found to be counterproductive in the exemplary implementation.
  • Similarly, each input headnote is also represented as a headnote-class vector $V_h^C$, with each element indicating the weight of a class or class identifier assigned to the headnote. Each element $v_h^C$ is defined as
    $$v_h^C = tf_h^C \cdot idf_h^C,$$
    with $tf_h^C$ denoting the frequency of the class identifier, and $idf_h^C$ denoting the inverse document frequency of the class identifier. $idf_h^C$ is defined as
    $$idf_h^C = \log\frac{N_C}{df_h},$$
    where $N_C$ is the total number of classes or class identifiers and $df_h$ is the frequency of the class or class identifier amongst the set of class identifiers associated with the headnote.
  • Once the annotation-class and headnote-class vectors are established, classification processor 130 computes each similarity score S2 as the cosine of the angle between them. This is expressed as
    $$S_2 = \cos\theta_{ah} = \frac{V_a^C \cdot V_h^C}{\|V_a^C\| \times \|V_h^C\|}.$$
    For headnotes that have more than one associated class identifier, the exemplary embodiment considers each class identifier separately from the others for that headnote, ultimately using the one yielding the maximum class-identifier similarity. The maximization criterion is used because, in some instances, a headnote may have two or more associated class identifiers (or Key Number classifications), indicating its discussion of two or more legal points. However, in most cases, only one of the class identifiers is relevant to a given annotation.
  • In block 233, classifier 133 determines a set of similarity scores S3 based on the probability that a headnote is associated with a given annotation, derived from class-identifier (or other meta-data) statistics. This probability is approximated by
    $$S_3 = P(h \mid a) = P(\{k\}_h \mid a) = \max_{k \in \{k\}_h} P(k \mid a),$$
    where $\{k\}_h$ denotes the set of class identifiers assigned to headnote h. Each annotation conditional class probability $P(k \mid a)$ is estimated by
    $$P(k \mid a) = \frac{1 + tf(k,a)}{|a| + \sum_{k'} tf(k',a)},$$
    where $tf(k,a)$ is the term frequency of the k-th class identifier among the class identifiers associated with the headnotes of annotation a; $|a|$ denotes the total number of unique class identifiers associated with annotation a (that is, the number of samples or cardinality of the set); and $\sum_{k'} tf(k',a)$ denotes the sum of the term frequencies for all the class identifiers.
  • The exemplary determination of similarity scores S3 relies on assumptions that class identifiers are assigned to a headnote independently of each other, and that only one class identifier in {k}h is actually relevant to annotation a. Although the one-class assumption does not hold for many annotations, it improves the overall performance of the system.
  • Alternatively, one can multiply the conditional class-identifier (Key Number classifications) probabilities for the annotation, but this effectively penalizes headnotes with multiple Key Number classifications (class assignments), compared to those with single Key Number classifications. Some other embodiments use Bayes' rule to incorporate a priori probabilities into classifier 133. However, some experimentation with this approach suggests that system performance is likely to be inferior to that provided in this exemplary implementation.
  • The inferiority may stem from the fact that annotations are created at different times, and the fact that one annotation has more citations than another does not necessarily mean it is more probable to occur for a given headnote. Indeed, a greater number of citations might only reflect that one annotation has been in existence longer and/or updated more often than another. Thus, other embodiments might use the prior probabilities based on the frequency that class numbers are assigned to the annotations.
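The smoothed estimate of P(k|a) and the max over a headnote's class identifiers can be sketched as follows. Function names and input shapes are illustrative assumptions, not the patent's code.

```python
def p_class_given_annotation(k, class_tf):
    """Laplace-smoothed estimate P(k|a) = (1 + tf(k,a)) / (|a| + sum of tf),
    where class_tf maps each class identifier to its term frequency among
    annotation a's headnotes and |a| is the number of unique identifiers."""
    total_tf = sum(class_tf.values())
    n_unique = len(class_tf)  # |a|
    return (1 + class_tf.get(k, 0)) / (n_unique + total_tf)

def s3(headnote_classes, class_tf):
    """S3: maximum of P(k|a) over the class identifiers assigned to the
    headnote, per the one-relevant-class assumption described above."""
    return max(p_class_given_annotation(k, class_tf) for k in headnote_classes)
```

The add-one smoothing keeps P(k|a) nonzero for class identifiers that never occur among the annotation's headnotes.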
  • In block 234, classifier 134 determines a set of similarity scores S4, based on P(a|h), the probability of each annotation given the text of the input headnote. In deriving a practical expression for computing P(a|h), the exemplary embodiment first assumes that an input headnote h is completely represented by a set of descriptors T, with each descriptor t assigned to a headnote with some probability, P(t|h). Then, based on the theory of total probability and Bayes' theorem, P(a|h) is expressed as
    $$P(a \mid h) = \sum_{t \in T} P(a \mid h,t)\,P(t \mid h) = \sum_{t \in T} \frac{P(h \mid a,t)\,P(a \mid t)}{P(h \mid t)}\,P(t \mid h).$$
    Assuming that a descriptor is independent of the class identifiers associated with a headnote allows one to make the approximation
    $$P(h \mid a,t) \approx P(h \mid t)$$
    and to compute the similarity scores S4 according to
    $$S_4 = P(a \mid h) = \sum_{t \in T} P(t \mid h)\,P(a \mid t),$$
    where $P(t \mid h)$ is approximated by
    $$P(t \mid h) = \frac{tf(t,h)}{\sum_{t' \in T} tf(t',h)}.$$
    $tf(t,h)$ denotes the frequency of term t in the headnote, and $\sum_{t' \in T} tf(t',h)$ denotes the sum of the frequencies of all terms in the headnote. $P(a \mid t)$ is defined according to Bayes' theorem as
    $$P(a \mid t) = \frac{P(t \mid a)\,P(a)}{\sum_{a' \in A} P(t \mid a')\,P(a')},$$
    where P(a) denotes the prior probability for annotation a, and $P(t \mid a)$, the probability of a descriptor t given annotation a, is estimated as
    $$P(t \mid a) \approx \frac{1}{|a|} \sum_{h \in a} P(t \mid h),$$
    and $\sum_{a' \in A}$ denotes summation over all annotations a' in the set of annotations A. Since all the annotation prior probabilities P(a) and P(a') are assumed to be equal, P(a|t) is computed using
    $$P(a \mid t) = \frac{P(t \mid a)}{\sum_{a' \in A} P(t \mid a')}.$$
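A sketch of computing S4 from the final expressions above, under the equal-priors assumption. All names and input shapes here are illustrative assumptions, not the patent's code.

```python
def s4(headnote_tf, p_t_given_a):
    """S4 = P(a|h) = sum over descriptors t of P(t|h) * P(a|t).
    headnote_tf: term frequencies of descriptors in headnote h.
    p_t_given_a: {annotation: {term: P(t|a)}}, e.g. each P(t|a) averaged
    over the annotation's headnotes.  Returns one score per annotation."""
    total = sum(headnote_tf.values())
    scores = {}
    for a in p_t_given_a:
        s = 0.0
        for t, tf in headnote_tf.items():
            p_t_h = tf / total  # P(t|h) = tf(t,h) / sum of tf over T
            # P(a|t) with equal priors: P(t|a) / sum over a' of P(t|a')
            denom = sum(p_t_given_a[a2].get(t, 0.0) for a2 in p_t_given_a)
            if denom > 0:
                s += p_t_h * p_t_given_a[a].get(t, 0.0) / denom
        scores[a] = s
    return scores
```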
  • Block 235, which represents operation of composite-score generator 135, entails computing a set of composite similarity scores $CS_a^h$ based on the sets of similarity scores determined at blocks 231-234 by classifiers 131-134, with each composite score indicating the similarity of the input headnote h to each annotation a. More particularly, generator 135 computes each composite score $CS_a^h$ according to
    $$CS_a^h = \sum_{i=1}^{4} w_{ia}\,S_{a,i}^h,$$
    where $S_{a,i}^h$ denotes the similarity score of the i-th similarity score generator for the input headnote h and annotation a, and $w_{ia}$ is a weight assigned to the i-th similarity score generator and annotation a. Execution of the exemplary method then continues at block 236.
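The weighted composite is a simple dot product of the four classifier scores with their annotation-specific weights; a one-line sketch (names are illustrative):

```python
def composite_score(scores, weights):
    """CS_a^h = sum over i of w_ia * S_a,i^h: weighted sum of the four
    classifier scores for one headnote-annotation pair."""
    return sum(w * s for w, s in zip(weights, scores))
```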
  • At block 236, assignment decision-maker 136 recommends that the input headnote or a document, such as a case, associated with the headnote be classified or incorporated into one or more of the annotations based on the set of composite scores and decision criteria within decision-criteria module 137. In the exemplary embodiment, the headnote is assigned to annotations according to the following decision rule:
    $$\text{If } CS_a^h > \Gamma_a, \text{ then recommend assignment of } h \text{ or } D_h \text{ to annotation } a,$$
    where $\Gamma_a$ is an annotation-specific threshold from decision-criteria module 137 and $D_h$ denotes a document, such as a legal opinion, associated with the headnote. (In the exemplary embodiment, each ALR annotation includes the text of associated headnotes and its full case citation.)
  • The annotation-classifier weights $w_{ia}$, for i = 1 to 4 and a ∈ A, and the annotation thresholds $\Gamma_a$, a ∈ A, are learned during a tuning phase. The weights, 0 ≤ $w_{ia}$ ≤ 1, reflect system confidence in the ability of each similarity score to route headnotes to annotation a. Similarly, the annotation thresholds $\Gamma_a$, a ∈ A, are also learned and reflect the homogeneity of an annotation. In general, annotations dealing with narrow topics tend to have higher thresholds than those dealing with multiple related topics.
  • In this ALR embodiment, the thresholds reflect that over 90% of the headnotes (or associated documents) are not assigned to any annotations. Specifically, the exemplary embodiment estimates optimal annotation-classifier weights and annotation thresholds through an exhaustive search over a five-dimensional space (four weights plus a threshold). The space is discretized to make the search manageable. The optimal weights are those corresponding to maximum precision at recall levels of at least 90%.
  • More precisely, this entails trying every combination of four weight variables, and for each combination, trying 20 possible threshold values over the interval [0,1]. The combination of weights and threshold that yields the best precision and recall is then selected. The exemplary embodiment excludes any weight-threshold combinations resulting in less than 90% recall.
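A toy version of the discretized exhaustive search, maximizing precision subject to at least 90% recall. The grid sizes, data shapes, and names here are assumptions; the text only fixes 20 threshold values over [0, 1] and the 90% recall constraint.

```python
from itertools import product

def tune(scored, n_steps=5, n_thresholds=20):
    """Exhaustive search over discretized weights and thresholds for one
    annotation.  scored: list of (s1, s2, s3, s4, relevant) tuples.
    Returns the (weights, threshold) pair with the best precision among
    combinations achieving recall >= 0.9, and that precision."""
    grid = [i / (n_steps - 1) for i in range(n_steps)]
    best, best_prec = None, -1.0
    n_rel = sum(1 for *_, rel in scored if rel)
    for w in product(grid, repeat=4):
        cs = [(sum(wi * si for wi, si in zip(w, s[:4])), s[4]) for s in scored]
        for j in range(n_thresholds):
            gamma = j / n_thresholds  # 20 threshold values over [0, 1)
            picked = [(c, rel) for c, rel in cs if c > gamma]
            tp = sum(1 for _, rel in picked if rel)
            recall = tp / n_rel if n_rel else 0.0
            prec = tp / len(picked) if picked else 0.0
            if recall >= 0.9 and prec > best_prec:
                best_prec, best = prec, (w, gamma)
    return best, best_prec
```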
  • To achieve higher precision levels, the exemplary embodiment effectively requires assignments to compete for their assigned annotations or target classifications. This competition entails use of the following rule:
    $$\text{Assign } h \text{ to } a \text{ iff } CS_a^h > \alpha \hat{S},$$
    where $\alpha$ denotes an empirically determined value greater than zero and less than 1, for example 0.8, and $\hat{S}$ denotes the maximum composite similarity score associated with a headnote in $\{H_a\}$, the set of headnotes assigned to annotation a.
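The competition rule reduces to a single comparison; a sketch (the default alpha of 0.8 is the example value given in the text, the rest is illustrative):

```python
def accept(cs, max_cs_in_class, alpha=0.8):
    """Competition rule: assign h to a iff CS_a^h > alpha * S_hat, where
    S_hat is the maximum composite score among headnotes already assigned
    to annotation a, and 0 < alpha < 1 is empirically chosen."""
    return cs > alpha * max_cs_in_class
```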
  • Block 240 entails processing classification recommendations from classification processor 130. To this end, processor 130 transfers classification recommendations to preliminary classification database 140 (shown in Figure 1). Database 140 sorts the recommendations by annotation, jurisdiction, or other relevant criteria and stores them in, for example, a single first-in-first-out (FIFO) queue, or as multiple FIFO queues based on single annotations or subsets of annotations.
  • One or more of the recommendations are then communicated by request or automatically to workcenter 150, specifically workstations 152, 154, and 156. Each of the workstations displays, automatically or in response to user activation, one or more graphical-user interfaces, such as graphical-user interface 152.1.
  • Figure 4 shows an exemplary form of graphical-user interface 152.1. Interface 152.1 includes concurrently displayed windows or regions 410, 420, 430 and buttons 440-490.
  • Window 410 displays a recommendation list 412 of headnote identifiers from preliminary classification database 140. Each headnote identifier is logically associated with at least one annotation identifier (shown in window 430). Each of the listed headnote identifiers is selectable using a selection device, such as a keyboard, mouse, or microphone. A headnote identifier 412.1 in list 412 is automatically highlighted, by, for example, reverse-video presentation, upon selection. In response, window 420 displays a headnote 422 and a case citation 424, both of which are associated with each other and the highlighted headnote identifier 412.1. In further response, window 430 displays at least a portion or section of an annotation outline 432 (or classification hierarchy) associated with the annotation designated by the annotation identifier associated with headnote 412.1.
  • Button 440, labeled "New Section," allows a user to create a new section or subsection in the annotation outline. This feature is useful since, in some instances, a headnote suggestion is good but does not fit an existing section of the annotation. Creating the new section or subsection thus allows for convenient expansion of the annotation.
  • Button 450 toggles on and off the display of a text box describing headnote assignments made to the current annotation during the current session. In the exemplary embodiment, the text box presents each assignment in a short textual form, such as <annotation or class identifier><subsection or section identifier><headnote identifier>. This feature is particularly convenient for larger annotation outlines that exceed the size of window 430 and require scrolling the contents of the window.
  • Button 460, labeled "Un-Allocate," allows a user to de-assign, or declassify, a headnote from a particular annotation. Thus, if a user changes her mind regarding a previous, unsaved, classification, the user can nullify it. In some embodiments, headnotes identified in window 410 are understood to be assigned to the particular annotation section displayed in window 430 unless the user decides that the assignment is incorrect or inappropriate. (In some embodiments, acceptance of a recommendation entails automatic creation of hyperlinks linking the annotation to the case and the case to the annotation.)
  • Button 470, labeled "Next Annotation," allows a user to cause display of the set of headnotes recommended for assignment to the next annotation. Specifically, this entails not only retrieving headnotes from preliminary classification storage 140 and displaying them in window 410, but also displaying the relevant annotation outline within window 430.
  • Button 480, labeled "Skip Anno," allows a user to skip the current annotation and its suggestions altogether and advance to the next set of suggestions and associated annotation. This feature is particularly useful when an editor wants another editor to review assignments to a particular annotation, or when the editor wants to review the annotation at another time, for example, after reading or studying the entire annotation text. The suggestions remain in preliminary classification database 140 until they are either reviewed or removed. (In some embodiments, the suggestions are time-stamped and may be supplanted with more current suggestions or deleted automatically after a preset period of time, with the time period, in some variations, dependent on the particular annotation.)
  • Button 490, labeled "Exit," allows an editor to terminate an editorial session. Upon termination, acceptances and recommendations are stored in ALR annotations database 110.
  • Figure 2 shows that after processing of the preliminary classifications, execution of the exemplary method continues at block 250. Block 250 entails updating of classification decision criteria. In the exemplary embodiment, this entails counting the numbers of accepted and rejected classification recommendations for each annotation, and adjusting the annotation-specific decision thresholds and/or classifier weights appropriately. For example, if 80% of the classification recommendations for a given annotation are rejected during one day, week, month, quarter or year, the exemplary embodiment may increase the decision threshold associated with that annotation to reduce the number of recommendations. Conversely, if 80% are accepted, the threshold may be lowered to ensure that a sufficient number of recommendations are being considered.
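The feedback adjustment of block 250 can be sketched as follows; the 80% trigger comes from the example above, while the step size and function name are assumptions:

```python
def adjust_threshold(gamma, accepted, rejected, step=0.05):
    """Update an annotation's decision threshold from editorial feedback:
    raise it when most recommendations are rejected, lower it when most
    are accepted.  The 0.8 trigger follows the text's 80% example; the
    step size of 0.05 is an illustrative assumption."""
    total = accepted + rejected
    if total == 0:
        return gamma
    if rejected / total >= 0.8:
        return min(1.0, gamma + step)  # too many rejections: be stricter
    if accepted / total >= 0.8:
        return max(0.0, gamma - step)  # mostly accepted: allow more suggestions
    return gamma
```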
  • Exemplary System for Classifying Headnotes to American Jurisprudence
  • Figure 5 shows a variation of system 100 in the form of an exemplary classification system 500 tailored to facilitate classification of documents to one or more of the 135,000 sections of American Jurisprudence (AmJur). Similar to an ALR annotation, each AmJur section cites relevant cases as they are decided by the courts. Likewise, updating AmJur is time-consuming.
  • In comparison to system 100, classification system 500 includes six classifiers (classifiers 131-134 and classifiers 510 and 520), a composite score generator 530, and an assignment decision-maker 540. Classifiers 131-134 are identical to the ones used in system 100, with the exception that they operate on AmJur data as opposed to ALR data.
  • Classifiers 510 and 520 process AmJur section text itself, instead of proxy text based on headnotes cited within the AmJur section. More specifically, classifier 510 operates using the formulae underlying classifier 131 to generate similarity measurements based on the tf-idf (term frequency-inverse document frequency) weights of noun-word pairs in AmJur section text. And classifier 520 operates using the formulae underlying classifier 134 to generate similarity measurements based on the probabilities of a section text given the input headnote.
  • Once the measurements are computed, each classifier assigns each AmJur section a similarity score based on a numerical ranking of its respective set of similarity measurements. Thus, for any input headnote, each of the six classifiers effectively ranks the 135,000 AmJur sections according to their similarities to the headnote. Given the differences in the classifiers and the data underlying their scores, it is unlikely that all six classifiers would rank the most relevant AmJur section the highest. Table 1 shows a partial ranked listing of AmJur sections showing how each classifier scored, or ranked, their similarity to a given headnote. Table 1: Partial Ranked Listing of AmJur Sections Based on Median of Six Similarity Scores
    Section    C1 Rank    C2 Rank    C3 Rank    C4 Rank    C5 Rank    C6 Rank    Median Rank
    Section_1 1 8 4 1 3 2 2.5
    Section_2 3 2 5 9 1 3 3
    Section_3 2 4 6 5 4 4 4
    Section_4 5 1 3 8 6 1 4
    Section_5 7 3 2 2 5 5 4
    Section_6 4 5 1 7 2 9 4.5
    Section_7 8 7 8 4 7 6 7
    Section_8 6 9 7 3 10 7 7
    Section_9 9 10 9 6 9 10 9
    Section_10 10 6 10 10 8 8 9
  • Composite score generator 530 generates a composite similarity score for each AmJur section based on its corresponding set of six similarity scores. In the exemplary embodiment, this entails computing the median of the six scores for each AmJur section. However, other embodiments can compute a uniform or non-uniformly weighted average of all six or a subset of the six ranking. Still other embodiments can select the maximum, minimum, or mode as the composite score for the AmJur section. After generating the composite scores, the composite score generator forwards data identifying the AmJur section associated with the highest composite score, the highest composite score, and the input headnote to assignment decision-maker 540.
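The median-of-ranks composite can be sketched with the standard library; the test values below match the Table 1 rows (names are illustrative):

```python
from statistics import median

def composite_rank(ranks_per_section):
    """Median-of-ranks composite: each of the six classifiers ranks every
    AmJur section, and the composite is the median of a section's six
    ranks (lower is better), as illustrated in Table 1."""
    return {sec: median(ranks) for sec, ranks in ranks_per_section.items()}
```

Using the median rather than a mean makes the composite robust to one or two classifiers that rank a relevant section poorly.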
  • Assignment decision-maker 540 provides a fixed portion of headnote-classification recommendations to preliminary classification database 140, based on the total number of input headnotes per fixed time period. The fixed number and time period governing the number of recommendations are determined according to parameters within decision-criteria module 137. For example, one embodiment ranks all incoming headnotes for the time period based on their composite scores and recommends only those headnotes that rank in the top 16 percent.
  • In some instances, more than one headnote may have a composite score that equals a given cut-off threshold, such as the top-16% cut-off. To ensure greater accuracy in these circumstances, the exemplary embodiment re-orders all headnote-section pairs that coincide with the cut-off threshold, using the six actual classifier scores.
  • This entails converting the six classifier scores for a particular headnote-section pair into six Z-scores and then multiplying the six Z-scores for that pair to produce a single similarity measure. (Z-scores are obtained by assuming that each classifier score has a normal distribution, estimating the mean and standard deviation of the distribution, and then subtracting the mean from the classifier score and dividing the result by the standard deviation.) The headnote-section pairs that meet the acceptance criteria are then re-ordered, or re-ranked, according to this new similarity measure, with as many as needed to achieve the desired number of total recommendations being forwarded to preliminary classification database 140. (Other embodiments may apply this "reordering" to all of the headnote-section pairs and then filter these based on the acceptance criteria necessary to obtain the desired number of recommendations.)
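The Z-score tie-breaking measure can be sketched as follows. The input shapes are assumptions; the embodiment estimates each classifier's score distribution from data, which is represented here by passing a score population per classifier.

```python
from statistics import mean, pstdev

def z_score_measure(pair_scores, score_populations):
    """Convert each classifier score for a headnote-section pair into a
    Z-score (score minus estimated mean, divided by estimated standard
    deviation) and multiply the Z-scores into one similarity measure.
    pair_scores: the pair's score from each classifier.
    score_populations: per classifier, the scores used to estimate the
    mean and standard deviation (an illustrative stand-in)."""
    product = 1.0
    for s, population in zip(pair_scores, score_populations):
        mu, sigma = mean(population), pstdev(population)
        product *= (s - mu) / sigma if sigma else 0.0
    return product
```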
  • Exemplary System for Classifying Headnotes to West Key Number System
  • Figure 6 shows another variation of system 100 in the form of an exemplary classification system 600 tailored to facilitate classification of input headnotes to classes of the West Key Number System. The Key Number System is a hierarchical classification system with 450 top-level classes, which are further subdivided into 92,000 sub-classes, each having a unique class identifier. In comparison to system 100, system 600 includes classifiers 131 and 134, a composite score generator 610, and an assignment decision-maker 620.
  • In accord with previous embodiments, classifiers 131 and 134 model each input headnote as a feature vector of noun-word pairs and each class identifier as a feature vector of noun-word pairs extracted from headnotes assigned to it. Classifier 131 generates similarity scores based on the tf-idf products for noun-word pairs in headnotes assigned to each class identifier and to a given input headnote. And classifier 134 generates similarity scores based on the probabilities of a class identifier given the input headnote. Thus, system 600 generates over 184,000 similarity scores, with each score representing the similarity of the input headnote to a respective one of the over 92,000 class identifiers in the West Key Number System using a respective one of the two classifiers.
  • Composite score generator 610 combines the two similarity measures for each possible headnote-class-identifier pair to generate a respective composite similarity score. In the exemplary embodiment, this entails defining, for each class or class identifier, two normalized cumulative histograms (one for each classifier) based on the headnotes already assigned to the class. These histograms approximate corresponding cumulative distribution functions, allowing one to determine the percentage of assigned headnotes that scored below a certain similarity score.
  • More particularly, the two cumulative normalized histograms for class-identifier c, based on classifiers 131 and 134, are respectively denoted $F_c^1$ and $F_c^2$, and estimated according to
    $$F_c^1(s) = F_c^1(s - 0.01) + \frac{1}{M_c}\left|\{h_i \mid S_i^1 = s\}\right|$$
    and
    $$F_c^2(s) = F_c^2(s - 0.01) + \frac{1}{M_c}\left|\{h_i \mid S_i^2 = s\}\right|,$$
    where c denotes a particular class or class identifier; s = 0, 0.01, 0.02, 0.03, ..., 1.0; F(s < 0) = 0; $M_c$ denotes the number of headnotes classified to or associated with class or class identifier c; |B| denotes the number of elements in the set B; $h_i$, i = 1, ..., $M_c$, denotes the set of headnotes already classified or associated with class or class identifier c; $S_i^1$ denotes the similarity score for headnote $h_i$ and class-identifier c, as measured by classifier 131; and $S_i^2$ denotes the similarity score for headnote $h_i$ and class-identifier c, as measured by classifier 134. (In this context, each similarity score indicates the similarity of a given assigned headnote to all the headnotes assigned to class c.) In other words, $|\{h_i \mid S_i^1 = s\}|$ denotes the number of headnotes assigned to class c that received a score of s from classifier 131, and $|\{h_i \mid S_i^2 = s\}|$ denotes the number of headnotes assigned to class c that received a score of s from classifier 134.
  • Thus, for every possible score value (between 0 and 1 with a particular score spacing), each histogram provides the percentage of assigned headnotes that scored higher and lower than that particular score. For example, for classifier 131, the histogram for class identifier c might show that 60% of the set of headnotes assigned to class c scored higher than 0.7 when compared to the set of headnotes as a whole, whereas for classifier 134 the histogram might show that 50% of the assigned headnotes scored higher than 0.7.
  • Next, composite score generator 610 converts each score for the input headnote into a normalized similarity score using the corresponding histogram and computes each composite score for each class based on the normalized scores. In the exemplary embodiment, this conversion entails mapping each classifier score to the corresponding histogram to determine its cumulative probability and then multiplying the cumulative probabilities of respective pairs of scores associated with a given class c to compute the respective composite similarity score. The set of composite scores for the input headnote is then processed by assignment decision-maker 620.
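The cumulative-histogram normalization and the product composite can be sketched as follows. The 0.01 score spacing follows the recurrence above; the rounding details, names, and input shapes are assumptions.

```python
def cumulative_histogram(scores, step=0.01):
    """Normalized cumulative histogram F_c for one classifier and class:
    F(s) = fraction of the class's assigned headnotes scoring <= s, built
    on the grid s = 0, 0.01, ..., 1.0 via F(s) = F(s - 0.01) + count/M_c.
    scores: similarity scores of headnotes already assigned to the class
    (assumed non-empty and quantized to the grid by rounding)."""
    n_bins = round(1.0 / step) + 1
    m = len(scores)
    hist, prev = [0.0] * n_bins, 0.0
    for i in range(n_bins):
        s = round(i * step, 2)
        count = sum(1 for x in scores if round(x, 2) == s)
        prev += count / m
        hist[i] = prev
    return hist

def composite(score1, score2, hist1, hist2, step=0.01):
    """Composite score for one headnote-class pair: the product of the two
    cumulative probabilities of the classifier scores."""
    return hist1[round(score1 / step)] * hist2[round(score2 / step)]
```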
  • Assignment decision-maker 620 forwards a fixed number of the top-scoring class identifiers to preliminary classification database 140. The exemplary embodiment suggests the class identifiers having the top five composite similarity scores for every input headnote.
  • Other Exemplary Applications
  • The components of the various exemplary systems presented can be combined in myriad ways to form other classification systems of both greater and lesser complexity. Additionally, the components and systems can be tailored for types of documents other than headnotes. Indeed, the components and systems and their embodied teachings and principles of operation are relevant to virtually any text or data classification context.
  • For example, one can apply one or more of the exemplary systems and related variations to classify electronic voice and mail messages. Some mail-classifying systems may include one or more classifiers in combination with conventional rules that classify messages as useful or spam based on, for example, whether the sender appears in the recipient's address book or shares the recipient's domain.
  • Appendix A Exemplary Stop Words
  • a a.m ab about above accordingly across ad after afterward afterwards again against ago ah ahead ain't all allows almost alone along already alright also although always am among amongst an and and/or anew another ante any anybody anybody's anyhow anymore anyone anyone's anything anything's anytime anytime's anyway anyways anywhere anywhere's anywise appear approx are aren't around as aside associated at available away awfully awhile b banc be became because become becomes becoming been before beforehand behalf behind being below beside besides best better between beyond both brief but by by the c came can can't cannot cant cause causes certain certainly cetera cf ch change changes cit c1 clearly cmt co concerning consequently consider contain containing contains contra corresponding could couldn't course curiam currently d day days dba de des described di did didn't different divers do does doesn't doing don't done down downward downwards dr du during e e.g each ed eds eg eight eighteen eighty either eleven else elsewhere enough especially et etc even ever evermore every everybody everybody's everyone everyone's everyplace everything everything's everywhere everywhere's example except f facie facto far few fewer fide fides followed following follows for forma former formerly forth forthwith fortiori fro from further furthermore g get gets getting given gives go goes going gone got gotten h had hadn't happens hardly has hasn't have haven't having he he'd he'll he's hello hence henceforth her here here's hereabout hereabouts hereafter herebefore hereby herein hereinafter hereinbefore hereinbelow hereof hereto heretofore hereunder hereunto hereupon herewith hers herself hey hi him himself his hither hitherto hoc hon how howbeit however howsoever hundred i i'd i'll i'm i've i.e ibid ibidem id ie if ignored ii iii illus immediate in inasmuch inc indeed indicate indicated indicates infra initio insofar instead int he into intra inward ipsa is isn't it it's its 
itself iv ix j jr judicata just k keep kept kinda know known knows 11a last later latter latterly le least les less lest let let's like likewise little looks ltd m ma'am many may maybe me meantime meanwhile mero might million more moreover most mostly motu mr mrs ms much must my myself name namely naught near necessary neither never nevermore nevertheless new next no no-one nobody nohow nolo nom non none nonetheless noone nor normally nos not nothing novo now nowhere o o'clock of ofa off ofhis oft often ofthe ofthis oh on once one one's ones oneself only on the onto op or other others otherwise ought our ours ourself ourselves out outside over overall overly own p p.m p.s par para paras pars particular particularly passim per peradventure percent perchance perforce perhaps pg pgs placed please plus possible pp probably provides q quite r rata rather really rel relatively rem res resp respectively right s sa said same says se sec seem seemed seeming seems seen sent serious several shall shalt she she'll she's should shouldn't since sir so some somebody somebody's somehow someone someone's something something's sometime sometimes somewhat somewhere somewhere's specified specify specifying still such sundry sup t take taken tam than that that's thats the their theirs them themselves then thence thenceforth thenceforward there there's thereafter thereby therefor therefore therefrom therein thereof thereon theres thereto theretofore thereunto thereupon therewith these they they'll thing things third this thither thorough thoroughly those though three through throughout thru thus to to-wit together too toward towards u uh unless until up upon upward upwards used useful using usually v v.s value various very vi via vii viii virtually vs w was wasn't way we we'd we'll we're we've well went were weren't what what'll what's whatever whatsoever when whence whenever where whereafter whereas whereat whereby wherefore wherefrom wherein whereinto whereof whereon wheresoever 
whereto whereunder whereunto whereupon wherever wherewith whether which whichever while whither who who'd who'll who's whoever whole wholly wholy whom whose why will with within without won't would wouldn't x y y'all ya'll ye yeah yes yet you you'll you're you've your yours yourself yourselves z
  • Conclusion
  • In furtherance of the art, the inventors have presented various exemplary systems, methods, and software which facilitate the classification of text, such as headnotes or associated legal cases, to a classification system, such as that represented by the nearly 14,000 ALR annotations. The exemplary system classifies or makes classification recommendations based on text and class similarities and probabilistic relations. The system also provides a graphical-user interface to facilitate editorial processing of recommended classifications, and thus automated updating of document collections, such as the American Law Reports, American Jurisprudence, and countless others.
  • The embodiments described above are intended only to illustrate and teach one or more ways of practicing or implementing the present invention, not to restrict its breadth or scope. The actual scope of the invention, which embraces all ways of practicing or implementing the teachings of the invention, is defined only by the following claims.

Claims (23)

  1. A computerized system (100) for classifying input text (126, 128) to a target classification system having two or more target classes (122.1, 124.1, 126.1, 128.1), the system comprising:
    • means (131, 132, 133, 134) for determining for each of the target classes at least first and second scores based on the input text and the target class using respective first and second classification methods; and being characterised by comprising:
    • means (135) for determining for each of the target classes a corresponding composite score based on the first score scaled by a first class-specific weight for the target class and the second score scaled by a second class-specific weight for the target class; and
    • means (136, 137) for determining for each of the target classes whether to classify or recommend classification of the input text to the target class based on the corresponding composite score and a class-specific decision threshold for the target class.
  2. A computer-implemented method of classifying input text (126, 128) to a target classification system having two or more target classes (122.1, 124.1, 126.1, 128.1), the method comprising:
    for each target class:
    • determining first and second scores based on the input text and the target class using respective first and second classification methods; and being characterised by:
    for each target class:
    • determining a composite score based on the first score scaled by a first class-specific weight for the target class and the second score scaled by a second class-specific weight for the target class; and
    • determining whether to identify the input text for classification to the target class based on the composite score and a class-specific decision threshold for the target class.
  3. The method of claim 2:
    • wherein determining the first and second scores for each target class comprises:
    o determining the first score based on similarity of at least one or more portions of the input text to text associated with the target class; and
    o determining the second score based on similarity of a set of one or more non-target classes associated with the input text and a set of one or more non-target classes associated with the target class;
    • wherein the method further comprises determining for each target class:
    o a third score based on probability of the target class given a set of one or more non-target classes associated with the input text; and
    o a fourth score based on probability of the target class given at least a portion of the input text; and
    • wherein the composite score is further based on the third score scaled by a third class-specific weight for the target class and the fourth score scaled by a fourth class-specific weight for the target class.
  4. The method of claim 2:
    • wherein the input text is associated with first meta-data and each target class is associated with second meta-data; and
    • wherein at least one of the first and second scores is based on the first meta-data and the second meta-data.
  5. The method of claim 4, wherein the first meta-data comprises a first set of non-target classes that are associated with the input text and the second meta-data comprises a second set of non-target classes that are associated with the target class.
  6. The method of claim 2, comprising:
    for each target class (122.1, 124.1, 126.1, 128.1):
    • providing at least first and second class-specific weights and a class-specific decision threshold; and
    • using at least first and second classification methods to determine respective first and second scores based on the input text and the target class.
  7. The method of claim 2 or 6, wherein at least one of the first and second scores is based on a set of one or more noun-word pairs associated with the input text and a set of one or more noun-word pairs associated with the target class, with at least one noun-word pair in each set including a noun and a non-adjacent word.
  8. The method of claim 6, wherein providing each first and second class-specific weight and class-specific decision threshold comprises searching for a combination of first and second class-specific weights and class-specific decision threshold that yields a predetermined level of precision at a predetermined level of recall based on text classified to the target classification system.
  9. The method of claim 6, wherein a non-target classification system includes two or more non-target classes, and at least one of the first and second scores is based on one or more of the non-target classes that are associated with the input text and one or more of the non-target classes that are associated with the target class.
  10. The method of claim 9:
    • wherein the input text is a headnote (126, 128) for a legal document; and
    • wherein the target classification system and the non-target classification system are legal classification systems.
  11. The method of claim 6, wherein the target classification system includes more than 1000 target classes.
  12. The method of claim 6, further comprising:
    • displaying a graphical user interface (152.1) including first and second regions (410, 420), with the first region displaying or identifying at least a portion of the input text and the second region displaying information regarding the target classification system and at least one target class for which the input text was recommended for classification; and
    • displaying a selectable feature (412) on the graphical user interface, wherein selecting the feature initiates classification of the input text to the one target class.
  13. A machine-readable medium comprising instructions for implementing the method of claim 2 or 6.
  14. The method of claim 2, wherein the first and second scores are selected from the group consisting of:
    • a score based on similarity of at least one or more portions of the input text to text associated with the target class;
    • a score based on similarity of a set of one or more non-target classes associated with the input text and a set of one or more non-target classes associated with the target class;
    • a score based on probability of the target class given a set of one or more non-target classes associated with the input text; and
    • a score based on probability of the target class given at least a portion of the input text.
  15. The method of claim 14, wherein each target class (122.1, 124.1, 126.1, 128.1) is a document and the text associated with the target class comprises text of the document or text of another document associated with the target class.
  16. The method of claim 2, further comprising:
    updating the class-specific threshold for one of the target classes based on acceptance or rejection of recommended classifications of the input text.
  17. The method of claim 2, further comprising:
    • identifying one or more noun-word pairs in a portion of text.
  18. The method of claim 17, wherein identifying one or more noun-word pairs in the portion of text comprises:
    • identifying a first noun in the portion of text; and
    • identifying one or more words within a predetermined number of words of the first noun.
  19. The method of claim 18, wherein identifying one or more words within a predetermined number of words of the first noun comprises excluding a set of one or more stop words.
  20. The method of claim 17, wherein the portion of text is a paragraph.
  21. The method of claim 17, further comprising:
    determining one or more scores based on frequencies of one or more of the identified noun-word pairs in the portion of text and one or more noun-word pairs in text associated with one of the target classes.
  22. The method of claim 21, wherein determining one or more scores based on one or more identified noun-word pairs and one or more noun-word pairs in other text associated with one of the target classes comprises:
    • determining a respective weight for each identified noun-word pair, with the respective weight based on a product of a term frequency of the identified noun-word pair in the text and an inverse document frequency of the noun-word pairs in the other text associated with one of the target classes.
  23. The method of claim 2, further comprising:
    • identifying a first set of noun-word pairs in the input text, with the first set including at least one noun-word pair formed from a noun and non-adjacent word in the input text;
    • identifying two or more second sets of noun-word pairs, with each second set including at least one noun-word pair formed from a noun and non-adjacent word in text associated with a respective one of the target classes;
    • determining a set of scores based on the first and second sets of noun-word pairs; and
    • classifying or recommending classification of the input text to one or more of the target classes based on the set of scores.
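Claims 17 through 23 describe extracting noun-word pairs (a noun paired with nearby, possibly non-adjacent words, with stop words excluded) and weighting them tf-idf style. The following is a minimal sketch under stated assumptions: the window size, stop list, smoothed idf formula, and all identifiers are illustrative choices, not values fixed by the claims.

```python
# Hypothetical sketch of noun-word pair extraction (claims 17-19, 23)
# and tf-idf-style pair weighting (claims 21-22).
import math
from collections import Counter

# Illustrative stop list; claim 19 only requires that some set of stop
# words be excluded.
STOP_WORDS = frozenset({"the", "a", "an", "of", "for", "to", "and"})

def noun_word_pairs(tokens, nouns, window=3):
    """Pair each noun with the non-stop words within `window` positions
    of it; non-adjacent words are included, as claim 23 requires."""
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in nouns:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i and tokens[j] not in STOP_WORDS:
                    pairs.append((tok, tokens[j]))
    return pairs

def pair_weights(pairs, class_pair_sets):
    """Weight each pair by its term frequency in the input text times an
    inverse document frequency over the texts associated with the target
    classes (the product recited in claim 22; smoothing is illustrative)."""
    tf = Counter(pairs)
    n_classes = len(class_pair_sets)
    weights = {}
    for pair, freq in tf.items():
        df = sum(1 for s in class_pair_sets if pair in s)
        idf = math.log((1 + n_classes) / (1 + df)) + 1  # smoothed idf
        weights[pair] = freq * idf
    return weights

tokens = "the court denied the motion for summary judgment".split()
pairs = noun_word_pairs(tokens, nouns={"court", "motion", "judgment"})
# ("motion", "judgment") is a non-adjacent pair of the kind claim 23 names.
weights = pair_weights(pairs, class_pair_sets=[set(pairs), set()])
```

Pairing a noun with non-adjacent words inside a window captures loose collocations (e.g. "motion ... judgment") that a strict bigram model would miss, which is the apparent motivation for the non-adjacency language in claims 7 and 23.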
EP02786640A 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents Expired - Lifetime EP1464013B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP08017291A EP2012240A1 (en) 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US33686201P 2001-11-02 2001-11-02
US336862P 2001-11-02
PCT/US2002/035177 WO2003040875A2 (en) 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP08017291A Division EP2012240A1 (en) 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents

Publications (2)

Publication Number Publication Date
EP1464013A2 EP1464013A2 (en) 2004-10-06
EP1464013B1 true EP1464013B1 (en) 2009-01-21

Family

ID=23317997

Family Applications (2)

Application Number Title Priority Date Filing Date
EP02786640A Expired - Lifetime EP1464013B1 (en) 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents
EP08017291A Ceased EP2012240A1 (en) 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP08017291A Ceased EP2012240A1 (en) 2001-11-02 2002-11-01 Systems, methods, and software for classifying documents

Country Status (12)

Country Link
US (3) US7062498B2 (en)
EP (2) EP1464013B1 (en)
JP (3) JP4342944B2 (en)
CN (1) CN1701324B (en)
AT (1) ATE421730T1 (en)
AU (2) AU2002350112B8 (en)
CA (2) CA2470299C (en)
DE (1) DE60231005D1 (en)
DK (1) DK1464013T3 (en)
ES (1) ES2321075T3 (en)
NZ (1) NZ533105A (en)
WO (1) WO2003040875A2 (en)

Families Citing this family (103)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6154757A (en) * 1997-01-29 2000-11-28 Krause; Philip R. Electronic text reading environment enhancement method and apparatus
WO2002082224A2 (en) * 2001-04-04 2002-10-17 West Publishing Company System, method, and software for identifying historically related legal opinions
US7062498B2 (en) * 2001-11-02 2006-06-13 Thomson Legal Regulatory Global Ag Systems, methods, and software for classifying text from judicial opinions and other documents
US7139755B2 (en) 2001-11-06 2006-11-21 Thomson Scientific Inc. Method and apparatus for providing comprehensive search results in response to user queries entered over a computer network
US7356461B1 (en) * 2002-01-14 2008-04-08 Nstein Technologies Inc. Text categorization method and apparatus
US7188107B2 (en) * 2002-03-06 2007-03-06 Infoglide Software Corporation System and method for classification of documents
US8201085B2 (en) * 2007-06-21 2012-06-12 Thomson Reuters Global Resources Method and system for validating references
JP2006512693A (en) * 2002-12-30 2006-04-13 トムソン コーポレイション A knowledge management system for law firms.
US20040133574A1 (en) 2003-01-07 2004-07-08 Science Applications International Corporaton Vector space method for secure information sharing
US7089241B1 (en) * 2003-01-24 2006-08-08 America Online, Inc. Classifier tuning based on data similarities
US7725544B2 (en) 2003-01-24 2010-05-25 Aol Inc. Group based spam classification
US20040193596A1 (en) * 2003-02-21 2004-09-30 Rudy Defelice Multiparameter indexing and searching for documents
US7590695B2 (en) 2003-05-09 2009-09-15 Aol Llc Managing electronic messages
US7218783B2 (en) * 2003-06-13 2007-05-15 Microsoft Corporation Digital ink annotation process and system for recognizing, anchoring and reflowing digital ink annotations
US7739602B2 (en) 2003-06-24 2010-06-15 Aol Inc. System and method for community centric resource sharing based on a publishing subscription model
US7051077B2 (en) * 2003-06-30 2006-05-23 Mx Logic, Inc. Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
US8473532B1 (en) * 2003-08-12 2013-06-25 Louisiana Tech University Research Foundation Method and apparatus for automatic organization for computer files
US20050097120A1 (en) * 2003-10-31 2005-05-05 Fuji Xerox Co., Ltd. Systems and methods for organizing data
US7676739B2 (en) * 2003-11-26 2010-03-09 International Business Machines Corporation Methods and apparatus for knowledge base assisted annotation
CN102456075B (en) * 2003-12-31 2016-01-27 汤姆森路透社全球资源公司 Respond the method and system from the inquiry of user
US20050203899A1 (en) * 2003-12-31 2005-09-15 Anderson Steven B. Systems, methods, software and interfaces for integration of case law with legal briefs, litigation documents, and/or other litigation-support documents
US7647321B2 (en) * 2004-04-26 2010-01-12 Google Inc. System and method for filtering electronic messages using business heuristics
US8484295B2 (en) 2004-12-21 2013-07-09 Mcafee, Inc. Subscriber reputation filtering method for analyzing subscriber activity and detecting account misuse
US7953814B1 (en) * 2005-02-28 2011-05-31 Mcafee, Inc. Stopping and remediating outbound messaging abuse
US7680890B1 (en) 2004-06-22 2010-03-16 Wei Lin Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
WO2006033055A2 (en) * 2004-09-21 2006-03-30 Koninklijke Philips Electronics N.V. Method of providing compliance information
US9015472B1 (en) 2005-03-10 2015-04-21 Mcafee, Inc. Marking electronic messages to indicate human origination
US8738708B2 (en) * 2004-12-21 2014-05-27 Mcafee, Inc. Bounce management in a trusted communication network
US9160755B2 (en) 2004-12-21 2015-10-13 Mcafee, Inc. Trusted communication network
EP1846882A1 (en) * 2005-01-28 2007-10-24 Thomson Global Resources Systems, methods, and software for integration of case law, legal briefs, and/or litigation documents into law firm workflow
US7499591B2 (en) * 2005-03-25 2009-03-03 Hewlett-Packard Development Company, L.P. Document classifiers and methods for document classification
US20070078889A1 (en) * 2005-10-04 2007-04-05 Hoskinson Ronald A Method and system for automated knowledge extraction and organization
CN101454776A (en) * 2005-10-04 2009-06-10 汤姆森环球资源公司 Systems, methods, and software for identifying relevant legal documents
US9177050B2 (en) * 2005-10-04 2015-11-03 Thomson Reuters Global Resources Systems, methods, and interfaces for extending legal search results
US7917519B2 (en) * 2005-10-26 2011-03-29 Sizatola, Llc Categorized document bases
US7529748B2 (en) * 2005-11-15 2009-05-05 Ji-Rong Wen Information classification paradigm
CN100419753C (en) * 2005-12-19 2008-09-17 株式会社理光 Method and device for digital data central searching target file according to classified information
US8726144B2 (en) * 2005-12-23 2014-05-13 Xerox Corporation Interactive learning-based document annotation
US7333965B2 (en) * 2006-02-23 2008-02-19 Microsoft Corporation Classifying text in a code editor using multiple classifiers
KR100717401B1 (en) * 2006-03-02 2007-05-11 삼성전자주식회사 Method and apparatus for normalizing voice feature vector by backward cumulative histogram
US7735010B2 (en) * 2006-04-05 2010-06-08 Lexisnexis, A Division Of Reed Elsevier Inc. Citation network viewer and method
MX2008014893A (en) * 2006-05-23 2009-05-28 David P Gold System and method for organizing, processing and presenting information.
JP2008070958A (en) * 2006-09-12 2008-03-27 Sony Corp Information processing device and method, and program
JP4910582B2 (en) * 2006-09-12 2012-04-04 ソニー株式会社 Information processing apparatus and method, and program
US20080071803A1 (en) * 2006-09-15 2008-03-20 Boucher Michael L Methods and systems for real-time citation generation
US7844899B2 (en) * 2007-01-24 2010-11-30 Dakota Legal Software, Inc. Citation processing system with multiple rule set engine
US20080235258A1 (en) * 2007-03-23 2008-09-25 Hyen Vui Chung Method and Apparatus for Processing Extensible Markup Language Security Messages Using Delta Parsing Technology
US9323827B2 (en) * 2007-07-20 2016-04-26 Google Inc. Identifying key terms related to similar passages
DE102007034505A1 (en) * 2007-07-24 2009-01-29 Hella Kgaa Hueck & Co. Method and device for traffic sign recognition
CN100583101C (en) * 2008-06-12 2010-01-20 昆明理工大学 Text categorization feature selection and weight computation method based on field knowledge
US10354229B2 (en) * 2008-08-04 2019-07-16 Mcafee, Llc Method and system for centralized contact management
US8352857B2 (en) * 2008-10-27 2013-01-08 Xerox Corporation Methods and apparatuses for intra-document reference identification and resolution
WO2010141480A2 (en) 2009-06-01 2010-12-09 West Services Inc. Advanced features, service and displays of legal and regulatory information
WO2010141799A2 (en) 2009-06-05 2010-12-09 West Services Inc. Feature engineering and user behavior analysis
US8572084B2 (en) * 2009-07-28 2013-10-29 Fti Consulting, Inc. System and method for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor
CA2772082C (en) 2009-08-24 2019-01-15 William C. Knight Generating a reference set for use during document review
US10146864B2 (en) * 2010-02-19 2018-12-04 The Bureau Of National Affairs, Inc. Systems and methods for validation of cited authority
EP2583204A4 (en) 2010-06-15 2014-03-12 Thomson Reuters Scient Inc System and method for citation processing, presentation and transport for validating references
US8195458B2 (en) * 2010-08-17 2012-06-05 Xerox Corporation Open class noun classification
CN102033949B (en) * 2010-12-23 2012-02-29 南京财经大学 Correction-based K nearest neighbor text classification method
US9122666B2 (en) 2011-07-07 2015-09-01 Lexisnexis, A Division Of Reed Elsevier Inc. Systems and methods for creating an annotation from a document
US9305082B2 (en) 2011-09-30 2016-04-05 Thomson Reuters Global Resources Systems, methods, and interfaces for analyzing conceptually-related portions of text
WO2013123182A1 (en) * 2012-02-17 2013-08-22 The Trustees Of Columbia University In The City Of New York Computer-implemented systems and methods of performing contract review
US9058308B2 (en) 2012-03-07 2015-06-16 Infosys Limited System and method for identifying text in legal documents for preparation of headnotes
US9201876B1 (en) * 2012-05-29 2015-12-01 Google Inc. Contextual weighting of words in a word grouping
US8955127B1 (en) * 2012-07-24 2015-02-10 Symantec Corporation Systems and methods for detecting illegitimate messages on social networking platforms
CN103577462B (en) * 2012-08-02 2018-10-16 北京百度网讯科技有限公司 A kind of Document Classification Method and device
JP5526209B2 (en) * 2012-10-09 2014-06-18 株式会社Ubic Forensic system, forensic method, and forensic program
JP5823943B2 (en) * 2012-10-10 2015-11-25 株式会社Ubic Forensic system, forensic method, and forensic program
US9083729B1 (en) 2013-01-15 2015-07-14 Symantec Corporation Systems and methods for determining that uniform resource locators are malicious
US9189540B2 (en) * 2013-04-05 2015-11-17 Hewlett-Packard Development Company, L.P. Mobile web-based platform for providing a contextual alignment view of a corpus of documents
US20150026104A1 (en) * 2013-07-17 2015-01-22 Christopher Tambos System and method for email classification
JP2015060581A (en) * 2013-09-20 2015-03-30 株式会社東芝 Keyword extraction device, method and program
CN103500158A (en) * 2013-10-08 2014-01-08 北京百度网讯科技有限公司 Method and device for annotating electronic document
US10552459B2 (en) 2013-10-31 2020-02-04 Micro Focus Llc Classifying a document using patterns
US10255646B2 (en) 2014-08-14 2019-04-09 Thomson Reuters Global Resources (Trgr) System and method for implementation and operation of strategic linkages
US20160048510A1 (en) * 2014-08-14 2016-02-18 Thomson Reuters Global Resources (Trgr) System and method for integration and operation of analytics with strategic linkages
US10572877B2 (en) * 2014-10-14 2020-02-25 Jpmorgan Chase Bank, N.A. Identifying potentially risky transactions
US9652627B2 (en) * 2014-10-22 2017-05-16 International Business Machines Corporation Probabilistic surfacing of potentially sensitive identifiers
US20160162576A1 (en) * 2014-12-05 2016-06-09 Lightning Source Inc. Automated content classification/filtering
US20160314184A1 (en) * 2015-04-27 2016-10-27 Google Inc. Classifying documents by cluster
JP5887455B2 (en) * 2015-09-08 2016-03-16 株式会社Ubic Forensic system, forensic method, and forensic program
US9852337B1 (en) * 2015-09-30 2017-12-26 Open Text Corporation Method and system for assessing similarity of documents
WO2017066746A1 (en) * 2015-10-17 2017-04-20 Ebay Inc. Generating personalized user recommendations using word vectors
CN106874291A (en) * 2015-12-11 2017-06-20 北京国双科技有限公司 The processing method and processing device of text classification
RU2742824C2 (en) * 2016-03-31 2021-02-11 БИТДЕФЕНДЕР АйПиАр МЕНЕДЖМЕНТ ЛТД Systems and methods of devices automatic detection
US11347777B2 (en) * 2016-05-12 2022-05-31 International Business Machines Corporation Identifying key words within a plurality of documents
AU2017274558B2 (en) 2016-06-02 2021-11-11 Nuix North America Inc. Analyzing clusters of coded documents
WO2017216627A1 (en) 2016-06-16 2017-12-21 Thomson Reuters Global Resources Unlimited Company Scenario analytics system
US10146758B1 (en) * 2016-09-30 2018-12-04 Amazon Technologies, Inc. Distributed moderation and dynamic display of content annotations
US10325409B2 (en) * 2017-06-16 2019-06-18 Microsoft Technology Licensing, Llc Object holographic augmentation
CN107657284A (en) * 2017-10-11 2018-02-02 宁波爱信诺航天信息有限公司 A kind of trade name sorting technique and system based on Semantic Similarity extension
CN110390094B (en) * 2018-04-20 2023-05-23 伊姆西Ip控股有限责任公司 Method, electronic device and computer program product for classifying documents
US11087088B2 (en) * 2018-09-25 2021-08-10 Accenture Global Solutions Limited Automated and optimal encoding of text data features for machine learning models
US11862305B1 (en) 2019-06-05 2024-01-02 Ciitizen, Llc Systems and methods for analyzing patient health records
US11424012B1 (en) * 2019-06-05 2022-08-23 Ciitizen, Llc Sectionalizing clinical documents
US11170271B2 (en) * 2019-06-26 2021-11-09 Dallas Limetree, LLC Method and system for classifying content using scoring for identifying psychological factors employed by consumers to take action
US11636117B2 (en) 2019-06-26 2023-04-25 Dallas Limetree, LLC Content selection using psychological factor vectors
CN110377742A (en) * 2019-07-23 2019-10-25 腾讯科技(深圳)有限公司 Text classification evaluating method, device, readable storage medium storing program for executing and computer equipment
AU2021307783A1 (en) * 2020-07-14 2023-02-16 Thomson Reuters Enterprise Centre Gmbh Systems and methods for the automatic categorization of text
US11775592B2 (en) * 2020-08-07 2023-10-03 SECURITI, Inc. System and method for association of data elements within a document
US11941497B2 (en) * 2020-09-30 2024-03-26 Alteryx, Inc. System and method of operationalizing automated feature engineering
US11782957B2 (en) 2021-04-08 2023-10-10 Grail, Llc Systems and methods for automated classification of a document

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US583120A (en) * 1897-05-25 Soldeeing machine
US5054093A (en) * 1985-09-12 1991-10-01 Cooper Leon N Parallel, multi-unit, adaptive, nonlinear pattern class separator and identifier
US5157783A (en) 1988-02-26 1992-10-20 Wang Laboratories, Inc. Data base system which maintains project query list, desktop list and status of multiple ongoing research projects
US4961152A (en) * 1988-06-10 1990-10-02 Bolt Beranek And Newman Inc. Adaptive computing system
US5488725A (en) 1991-10-08 1996-01-30 West Publishing Company System of document representation retrieval by successive iterated probability sampling
US5265065A (en) 1991-10-08 1993-11-23 West Publishing Company Method and apparatus for information retrieval from a database by replacing domain specific stemmed phases in a natural language to create a search query
US5383120A (en) * 1992-03-02 1995-01-17 General Electric Company Method for tagging collocations in text
US5438629A (en) * 1992-06-19 1995-08-01 United Parcel Service Of America, Inc. Method and apparatus for input classification using non-spherical neurons
US5497317A (en) 1993-12-28 1996-03-05 Thomson Trading Services, Inc. Device and method for improving the speed and reliability of security trade settlements
US5434932A (en) 1994-07-28 1995-07-18 West Publishing Company Line alignment apparatus and process
EP0823090B1 (en) * 1995-04-27 2005-01-26 Northrop Grumman Corporation Adaptive filtering neural network classifier
US5918240A (en) * 1995-06-28 1999-06-29 Xerox Corporation Automatic method of extracting summarization using feature probabilities
US5778397A (en) * 1995-06-28 1998-07-07 Xerox Corporation Automatic method of generating feature probabilities for automatic extracting summarization
DE19526264A1 (en) * 1995-07-19 1997-04-10 Daimler Benz Ag Process for creating descriptors for the classification of texts
US5644720A (en) 1995-07-31 1997-07-01 West Publishing Company Interprocess communications interface for managing transaction requests
JP3040945B2 (en) 1995-11-29 2000-05-15 松下電器産業株式会社 Document search device
AU5359498A (en) * 1996-11-22 1998-06-10 T-Netix, Inc. Subword-based speaker verification using multiple classifier fusion, with channel, fusion, model, and threshold adaptation
JPH1185797A (en) * 1997-09-01 1999-03-30 Canon Inc Automatic document classification device, learning device, classification device, automatic document classification method, learning method, classification method and storage medium
US6052657A (en) 1997-09-09 2000-04-18 Dragon Systems, Inc. Text segmentation and identification of topic using language models
JP3571231B2 (en) * 1998-10-02 2004-09-29 日本電信電話株式会社 Automatic information classification method and apparatus, and recording medium recording automatic information classification program
AU1122100A (en) * 1998-10-30 2000-05-22 Justsystem Pittsburgh Research Center, Inc. Method for content-based filtering of messages by analyzing term characteristicswithin a message
JP2000222431A (en) * 1999-02-03 2000-08-11 Mitsubishi Electric Corp Document classifying device
CA2371688C (en) 1999-05-05 2008-09-09 West Publishing Company D/B/A West Group Document-classification system, method and software
JP2001034622A (en) * 1999-07-19 2001-02-09 Nippon Telegr & Teleph Corp <Ntt> Document sorting method and its device, and recording medium recording document sorting program
AU764415B2 (en) * 1999-08-06 2003-08-21 Lexis-Nexis System and method for classifying legal concepts using legal topic scheme
SG89289A1 (en) * 1999-08-14 2002-06-18 Kent Ridge Digital Labs Classification by aggregating emerging patterns
US6651058B1 (en) * 1999-11-15 2003-11-18 International Business Machines Corporation System and method of automatic discovery of terms in a document that are relevant to a given target topic
US7565403B2 (en) * 2000-03-16 2009-07-21 Microsoft Corporation Use of a bulk-email filter within a system for classifying messages for urgency or importance
US20020099730A1 (en) * 2000-05-12 2002-07-25 Applied Psychology Research Limited Automatic text classification system
US6751600B1 (en) * 2000-05-30 2004-06-15 Commerce One Operations, Inc. Method for automatic categorization of items
US6782377B2 (en) * 2001-03-30 2004-08-24 International Business Machines Corporation Method for building classifier models for event classes via phased rule induction
US7295965B2 (en) * 2001-06-29 2007-11-13 Honeywell International Inc. Method and apparatus for determining a measure of similarity between natural language sentences
WO2003014975A1 (en) * 2001-08-08 2003-02-20 Quiver, Inc. Document categorization engine
US7062498B2 (en) 2001-11-02 2006-06-13 Thomson Legal Regulatory Global Ag Systems, methods, and software for classifying text from judicial opinions and other documents

Also Published As

Publication number Publication date
DK1464013T3 (en) 2009-05-18
CA2737943C (en) 2013-07-02
US7062498B2 (en) 2006-06-13
AU2009202974A1 (en) 2009-08-13
NZ533105A (en) 2006-09-29
CN1701324A (en) 2005-11-23
CA2737943A1 (en) 2003-05-15
WO2003040875A3 (en) 2003-08-07
US7580939B2 (en) 2009-08-25
JP2013178851A (en) 2013-09-09
DE60231005D1 (en) 2009-03-12
JP4342944B2 (en) 2009-10-14
US20030101181A1 (en) 2003-05-29
EP2012240A1 (en) 2009-01-07
JP5392904B2 (en) 2014-01-22
JP2005508542A (en) 2005-03-31
EP1464013A2 (en) 2004-10-06
AU2002350112A1 (en) 2003-05-19
AU2009202974B2 (en) 2012-07-19
AU2002350112B2 (en) 2009-04-23
CA2470299A1 (en) 2003-05-15
CA2470299C (en) 2011-04-26
CN1701324B (en) 2011-11-02
US20060010145A1 (en) 2006-01-12
JP2009163771A (en) 2009-07-23
ES2321075T3 (en) 2009-06-02
AU2002350112B8 (en) 2009-04-30
WO2003040875A2 (en) 2003-05-15
US20100114911A1 (en) 2010-05-06
ATE421730T1 (en) 2009-02-15

Similar Documents

Publication Publication Date Title
EP1464013B1 (en) Systems, methods, and software for classifying documents
US6778941B1 (en) Message and user attributes in a message filtering method and system
US8239335B2 (en) Data classification using machine learning techniques
US8374977B2 (en) Methods and systems for transductive data classification
Nigam et al. Text classification from labeled and unlabeled documents using EM
US6401086B1 (en) Method for automatically generating a summarized text by a computer
US20140207717A1 (en) Data classification using machine learning techniques
US20050120019A1 (en) Method and apparatus for the automatic identification of unsolicited e-mail messages (SPAM)
US20060089924A1 (en) Document categorisation system
US20080086432A1 (en) Data classification methods using machine learning techniques
CN107220295A (en) A kind of people's contradiction reconciles case retrieval and mediation strategy recommends method
CN109062895B (en) Intelligent semantic processing method
Ko et al. Issues and empirical results for improving text classification
US20050198059A1 (en) Database and database management system
CN110598192A (en) Text feature reduction method based on neighborhood rough set
CN109977269B (en) Data self-adaptive fusion method for XML file
Liu et al. Text classification using sentential frequent itemsets
Doan A fuzzy-based approach for text representation in text categorization
Girgis et al. A feature selection and classification technique for text categorization
Srikanth et al. LCC at TRECVID 2005.
JP2004310199A (en) Document sorting method and document sort program
CN111858887A (en) Community question-answering system for airport service

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040602

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

17Q First examination report despatched

Effective date: 20040902


GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: THOMSON REUTERS GLOBAL RESOURCES

RIN1 Information on inventor provided before grant (corrected)

Inventor name: TYRELL, ALEX

Inventor name: JACKSON, PETER

Inventor name: TRAVERS, TIMOTHY EARL

Inventor name: AL-KOFAHI, KHALID

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 60231005

Country of ref document: DE

Date of ref document: 20090312

Kind code of ref document: P

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: BRAUNPAT BRAUN EDER AG

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2321075

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090622

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

26N No opposition filed

Effective date: 20091022

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090421

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090422

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

REG Reference to a national code

Ref country code: CH

Ref legal event code: PCOW

Free format text: NEW ADDRESS: NEUHOFSTRASSE 1, 6340 BAAR (CH)

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: CH

Ref legal event code: PCAR

Free format text: NEW ADDRESS: HOLEESTRASSE 87, 4054 BASEL (CH)

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 60231005

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0017300000

Ipc: G06F0016000000

REG Reference to a national code

Ref country code: CH

Ref legal event code: PCOW

Free format text: NEW ADDRESS: LANDIS + GYR-STRASSE 3, 6300 ZUG (CH)

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: THOMAS KRETSCHMER THOMSON REUTERS GLOBAL RESOU, CH

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: IPRIME RENTSCH KAELIN AG, CH

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 60231005

Country of ref document: DE

Representative's name: IPRIME BONNEKAMP SPARING PATENTANWALTSGESELLSC, DE

Ref country code: DE

Ref legal event code: R082

Ref document number: 60231005

Country of ref document: DE

Representative's name: IPRIME HUHN SPARING PATENTANWALTSGESELLSCHAFT , DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: PUE

Owner name: THOMSON REUTERS ENTERPRISE CENTRE GMBH, CH

Free format text: FORMER OWNER: THOMSON REUTERS GLOBAL RESOURCES UNLIMITED COMPANY, CH

REG Reference to a national code

Ref country code: ES

Ref legal event code: PC2A

Owner name: THOMSON REUTERS ENTERPRISE CENTRE GMBH

Effective date: 20200325

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20200402 AND 20200408

REG Reference to a national code

Ref country code: LU

Ref legal event code: PD

Owner name: THOMSON REUTERS ENTERPRISE CENTRE GMBH; CH

Free format text: FORMER OWNER: THOMSON REUTERS GLOBAL RESOURCES

Effective date: 20200508

REG Reference to a national code

Ref country code: NL

Ref legal event code: PD

Owner name: THOMSON REUTERS ENTERPRISE CENTRE GMBH; CH

Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), ASSIGNMENT; FORMER OWNER NAME: THOMSON REUTERS GLOBAL RESOURCES

Effective date: 20200608

REG Reference to a national code

Ref country code: BE

Ref legal event code: PD

Owner name: THOMSON REUTERS ENTERPRISE CENTRE GMBH; CH

Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), CESSION

Effective date: 20200515

Ref country code: DE

Ref legal event code: R081

Ref document number: 60231005

Country of ref document: DE

Owner name: THOMSON REUTERS ENTERPRISE CENTRE GMBH, CH

Free format text: FORMER OWNER: THOMSON REUTERS GLOBAL RESOURCES, ZUG, CH

Ref country code: DE

Ref legal event code: R082

Ref document number: 60231005

Country of ref document: DE

Representative's name: IPRIME HUHN SPARING PATENTANWALTSGESELLSCHAFT , DE

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20210928

Year of fee payment: 20

Ref country code: FR

Payment date: 20210915

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20210922

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20211206

Year of fee payment: 20

Ref country code: DK

Payment date: 20211109

Year of fee payment: 20

Ref country code: IE

Payment date: 20211012

Year of fee payment: 20

Ref country code: LU

Payment date: 20211025

Year of fee payment: 20

Ref country code: SE

Payment date: 20211012

Year of fee payment: 20

Ref country code: DE

Payment date: 20210923

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20211012

Year of fee payment: 20

Ref country code: CH

Payment date: 20211014

Year of fee payment: 20

Ref country code: BE

Payment date: 20211018

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 60231005

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MK

Effective date: 20221031

REG Reference to a national code

Ref country code: DK

Ref legal event code: EUP

Expiry date: 20221101

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20221031

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20221125

REG Reference to a national code

Ref country code: BE

Ref legal event code: MK

Effective date: 20221101

Ref country code: IE

Ref legal event code: MK9A

REG Reference to a national code

Ref country code: SE

Ref legal event code: EUG

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20221101

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20221031

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20221102