WO2002073453A1 - A trainable sentence planning system - Google Patents

A trainable sentence planning system

Info

Publication number
WO2002073453A1
Authority
WO
WIPO (PCT)
Prior art keywords
sentence
user
communicative
plan
generator
Prior art date
Application number
PCT/US2002/007235
Other languages
French (fr)
Other versions
WO2002073453A8 (en)
Inventor
Marilyn A. Walker
Owen Christopher Rambow
Monica Rogati
Original Assignee
AT&T Corp.
Priority date
Filing date
Publication date
Application filed by AT&T Corp.
Priority to US10/258,776 (US7729918B2)
Publication of WO2002073453A1
Publication of WO2002073453A8
Priority to US12/789,883 (US8185401B2)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/027 Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M3/4936 Speech interaction details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M2203/35 Aspects of automatic or semi-automatic exchanges related to information services provided via a voice call
    • H04M2203/355 Interactive dialogue design tools, features or methods

Abstract

The invention relates to a system that interacts with a user in an automated dialog system (100). The system may include a communicative goal generator (210) that generates communicative goals based on a first communication received from the user. The generated communicative goals may be related to information needed to be obtained from the user. The system may further include a sentence planning unit (120) that automatically plans one or more sentences based on the communicative goals generated by the communicative goal generator (210). At least one of the planned sentences may then be output to the user.

Description

A TRAINABLE SENTENCE PLANNING SYSTEM
CLAIM FOR PRIORITY/CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This non-provisional application claims the benefit of U.S. Provisional
Patent Application No. 60/275,653, filed March 14, 2001, which is incorporated by reference in its entirety.
TECHNICAL FIELD
[0002] This invention relates to automated systems for communication recognition and understanding.
BACKGROUND OF THE INVENTION
[0003] The past several years have seen a large increase in commercial spoken dialog systems. These systems typically utilize system-initiative dialog strategies. The system utterances are highly scripted for style and then recorded by voice talent. However, several factors argue against the continued use of these simple techniques for producing the system side of the conversation. First, the quality of text-to-speech systems has improved to the point of being a viable alternative to prerecorded prompts. Second, there is a perceived need for spoken dialog systems to be more flexible and support user initiative, which in turn requires greater flexibility in system utterance generation. Finally, dialog systems that support complex planning are currently under development, and these systems are likely to require more sophisticated system output than current technologies will be able to provide.
SUMMARY OF THE INVENTION
[0004] The invention relates to a system that interacts with a user in an automated dialog system. The system may include a communicative goal generator that generates communicative goals based on a first communication received from the user. The generated communicative goals may be related to information needed to be obtained from the user. The system may further include a sentence planning unit that automatically plans one or more sentences based on the communicative goals generated by the communicative goal generator. At least one of the planned sentences may then be output to the user.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Fig. 1 illustrates an exemplary sentence planning system;
[0006] Fig. 2 illustrates an exemplary sentence planning unit;
[0007] Fig. 3 illustrates an exemplary sentence planning system process;
[0008] Fig. 4 illustrates a list of clause combining operations with examples;
[0009] Fig. 5 illustrates an alternative zero planning tree;
[0010] Fig. 6 illustrates an alternative eight sentence planning tree;
[0011] Fig. 7 illustrates an alternative eight DSYNT structure;
[0012] Fig. 8 illustrates rules with the largest impact on the final rank booster score;
[0013] Fig. 9 illustrates an exemplary task classification system; and
[0014] Fig. 10 illustrates an exemplary task classification process.
DETAILED DESCRIPTION
[0015] Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping. Sentence scoping relates to the choice of syntactic structure for elementary speech acts and the decisions concerning how to combine them into sentences. For example, consider the required capabilities of a sentence planning system for a mixed-initiative spoken dialog system for travel planning in the sample dialog below:
1) System: Welcome... What airport would you like to fly out of?
2) User: I need to go to Columbus.
3) System: Flying to Columbus. What departure airport was that?
4) User: From Washington on September the 6th.
5) System: What time would you like to travel on September the 6th to Columbus from Washington?
[0016] In utterance 1), the system requests information about the user's departure airport, but in the user's response 2), the user takes the initiative to provide information about a destination. In the system's utterance 3), the system's goal is to implicitly confirm the destination (because of the possibility of error in the speech recognition component), and to also request information (for the second time) concerning the caller's departure airport. In the user's response 4), the caller provides the requested information but also provides the month and day of travel. Given the system's dialog strategy, the communicative goals for the system's utterance 5) are to implicitly confirm all of the information that the user has provided so far, i.e., the departure and destination cities, and the month and day of travel, as well as to request information about the time of travel. The system's representation of its communicative goals for the system's utterances is illustrated in Table 1 below:
implicit-confirm(orig-city: WASHINGTON)
implicit-confirm(dest-city: COLUMBUS)
implicit-confirm(month: 9)
implicit-confirm(day-number: 6)
request(depart-time: whatever)
Table 1: The Communicative Goals for System Utterance 5, Above.
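For illustration, the communicative goals in Table 1 can be written down as typed speech acts. The following is a minimal Python sketch of such a representation; the SpeechAct class and its field names are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SpeechAct:
    """One elementary communicative goal: a type plus type-specific parameters."""
    act_type: str   # e.g., "implicit-confirm" or "request"
    parameter: str  # the slot being confirmed or requested
    value: str      # the slot value ("whatever" for an open request)

# The communicative goals for system utterance 5 in the sample dialog (Table 1).
goals = [
    SpeechAct("implicit-confirm", "orig-city", "WASHINGTON"),
    SpeechAct("implicit-confirm", "dest-city", "COLUMBUS"),
    SpeechAct("implicit-confirm", "month", "9"),
    SpeechAct("implicit-confirm", "day-number", "6"),
    SpeechAct("request", "depart-time", "whatever"),
]
```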
[0017] An important job for the sentence planning system is to decide among the large number of potential realizations of these communicative goals. Some example alternative realizations are found in Table 2 below:
Alt | Realization | H | MLP
0 | What time would you like to travel on September 6th to Columbus from Washington? | 5 | .85
5 | Leaving on September 6th. What time would you like to travel from Washington to Columbus? | 4.5 | .82
8 | Leaving in September. Leaving on the 6th. What time would you, travelling from Washington to Columbus, like to leave? | 2 | .39

Table 2: Alternative Sentence Plan Realizations for the Communicative Goals for System Utterance 5 in the Sample Dialog, Above
[0018] Fig. 1 illustrates an exemplary sentence planning system 100 which may be used in the above sentence planning scenario as well as in many other various applications, including customer care, service or parts ordering, travel arrangement bookings, location/map information, etc. As shown in the figure, the sentence planning system 100 may include a sentence planning unit 120, a realization unit 130, a text-to-speech unit 140, a discourse history database 150, and a training database 160.
[0019] The sentence planning system 100 receives input recognition and understanding data from a dialog system that is based on input communications from the user. The dialog system may be any system that may be trained to recognize and understand any number of communication symbols, both acoustic and non-acoustic, including grammar fragments, meaningful words, meaningful phrases, meaningful phrase clusters, superwords, morphemes, multimodal signals, etc., using any of the methods known to one skilled in the art, including those found in U.S. Patent Nos. 5,675,707, 5,860,063 and 6,044,337, and U.S. Patent Application Nos. 08/943,944, 09/712,192 and 09/712,194, which are hereby incorporated by reference in their entirety. For example, the dialog system may operate using one or more of a variety of recognition and understanding algorithms to determine whether the user's input communications have been recognized and understood prior to inputting data to the sentence planning system 100.
[0020] In the sentence planning system 100, the discourse history database 150 serves as a database for storing each dialog exchange for a particular dialog or set of interactions with a user. The training database 160 stores sentence planning examples collected from interactions with human users, models built based on those examples, and positive and negative feedback on the quality of the examples that was provided by human users during the training phase. The training database 160 also stores the sentence planning features identified from the collected dialogs, and the sentence planning rules generated from both the dialogs and the sentence planning features. The sentence planning unit 120 exploits the training database 160 by using the dialog history stored in the discourse history database 150 to predict what sentence plan to generate for the current user interaction.
[0021] While the discourse history database 150 and the training database 160 are shown as separate databases in the exemplary embodiments, the dialog history and training data may be stored in the same database or memory, for example. In any case, any of the databases or memories used by the sentence planning system 100 may be stored externally or internally to the system 100.
[0022] Fig. 2 is a more detailed diagram of an exemplary sentence planning unit
120 shown in Fig. 1. The sentence planning unit 120 may include a communicative goal generator 210, a sentence plan generator 220 and a sentence plan ranker 230. The sentence plan generator 220 also receives input from the discourse history database 150 and the sentence plan ranker 230 also receives input from the training database 160.
[0023] The communicative goal generator 210 applies a particular dialog strategy to determine what the communicative goals should be for the system's next dialog turn. Although shown in Fig. 2 as part of the sentence planning unit 120, in another exemplary embodiment (shown by the dotted line), the communicative goal generator 210 may be separate from the sentence planning unit 120 and as such, may be a component of a dialog manager for an automated dialog system, for example (e.g., see Fig. 9). While traditional dialog managers used in conventional spoken dialog systems express communicative goals by looking up string templates that realize these goals and then simply pass the strings to a text-to-speech engine, the communicative goal generator 210 in the present invention generates semantic representations of communicative goals, such as those shown in Table 1.
[0024] These semantic representations are passed to the sentence planning unit
120 that can then use linguistic knowledge and prior training to determine the best realization for these communicative goals given the current discourse context, discourse history, and user. While the communicative goal generator 210 may or may not be physically located in the sentence planning unit 120, or even be a part of the sentence planning system 100, within the spirit and scope of the invention, for ease of discussion, the communicative goal generator 210 will be discussed as being part of the sentence planning unit 120.
[0025] In order to train the sentence planning system 100, the sentence planning process may include two distinct phases performed by the sentence plan generator 220 and the sentence plan ranker 230, respectively. In the first phase, the sentence plan generator 220 generates a potentially large sample of possible sentence plans for a given set of communicative goals generated by the communicative goal generator 210. In the second phase, the sentence plan ranker 230 ranks the sample sentence plans and then selects the top ranked plan to input to the realization unit 130. In ranking the generated sentence plans, the sentence plan ranker 230 may use rules automatically learned from training data stored in the training database 160, using techniques similar to those well-known to one of ordinary skill in the art.
[0026] In order to train the sentence planning system 100, neither hand-crafted rules nor the existence of a corpus in the domain of the sentence planning system 100 is necessarily needed. The trained sentence plan ranker 230 may learn to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. To further illustrate this, the sentence planning process, as well as the detailed descriptions of the sentence plan generator 220 and the sentence plan ranker 230, is set forth below.
[0027] Fig. 3 illustrates an exemplary sentence planning process using the sentence planning system 100. The process begins at step 3005 and proceeds to step 3010 where the communicative goal generator 210 receives recognition and understanding data from a dialog system and calculates the communicative goals of the particular transaction with the user. In step 3020, the communicative goal generator 210 transfers the calculated communicative goals along with the recognized/understood symbols to the sentence plan generator 220. The sentence plan generator 220 uses inputs from the discourse history database 150 to generate a plurality of sentence plans. Then, in step 3030, the generated sentence plans are ranked by the sentence plan ranker 230 using a set of rules stored in the training database 160.
[0028] The process proceeds to step 3040 where the sentence plan ranker 230 selects the highest ranked sentence plan. In step 3050, the selected sentence plan is input to the realization unit 130, which may be either a rule-based or stochastic surface realizer, for example. In the realization unit 130, linguistic rules and/or linguistic knowledge, derived from being trained using an appropriate dialog corpus, are applied to generate the surface string representation. Specifically, the types of linguistic rules or knowledge that the realization unit 130 may apply may concern the appropriate irregular verb forms, subject-verb agreement, inflecting words, word order, and the application of function words. For example, in English, the indirect object of the verb "give" is matched with the function word "to" as in the sentence "Matthew GAVE the book TO Megan". Note that for ease of discussion, "linguistic rules" as described herein will be intended to encompass either or both "linguistic rules" and/or "linguistic knowledge".
[0029] Then, in step 3060, the realized sentence plan is converted from text to speech by the text-to-speech unit 140 and is output to the user in step 3070. The text-to-speech unit 140 may be a text-to-speech engine known to those of skill in the art, such as that embodied in the AT&T NextGen TTS system, and possibly trained with lexical items specific to the domain of the sentence planning system 100. The device that outputs the converted sentence may be any device capable of producing verbal and/or non-verbal communications, such as a speaker, transducer, TV screen, CRT, or any other output device known to those of ordinary skill in the art. If the output includes speech, the automated speech may be produced by a voice synthesizer, voice recordings, or any other method or device capable of automatically producing audible sound known to those of ordinary skill in the art. The process then goes to step 3080 and ends.
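Taken end to end, steps 3005 through 3080 form a generate-rank-realize pipeline. The Python sketch below compresses that control flow into toy stubs; every function body here is a hypothetical stand-in for the corresponding unit described above, not the patent's implementation:

```python
def calculate_communicative_goals(recognition, understanding):
    # Step 3010 (stub): derive goals from the recognition/understanding data.
    return [("implicit-confirm", "dest-city", understanding["dest-city"]),
            ("request", "depart-time", None)]

def generate_sentence_plans(goals, discourse_history):
    # Step 3020 (stub): the real generator produces many candidate plans.
    city = goals[0][2]
    return [f"What time would you like to travel to {city}?",
            f"Flying to {city}. What time would you like to leave?"]

def rank_score(plan, rules):
    # Step 3030 (stub): sum the weights of learned rules whose conditions fire.
    return sum(weight for condition, weight in rules if condition(plan))

def sentence_planning_pipeline(recognition, understanding, history, rules):
    goals = calculate_communicative_goals(recognition, understanding)  # step 3010
    candidates = generate_sentence_plans(goals, history)               # step 3020
    best = max(candidates, key=lambda p: rank_score(p, rules))         # steps 3030-3040
    return best  # steps 3050-3070 would realize this plan and synthesize speech

toy_rules = [(lambda p: p.startswith("Flying"), 0.94)]  # an invented example rule
print(sentence_planning_pipeline({}, {"dest-city": "Columbus"}, [], toy_rules))
```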
[0030] In general, the role of the sentence planning system 100 is to choose abstract lexico-structural realizations for a set of communicative goals generated by the communicative goal generator 210. In contrast to conventional dialog systems that simply output completely formed utterances, the output of the above-described text-to-speech unit 140 provides the input back to the sentence planning system 100 in the form of a single spoken dialog text plan for each interaction between the system and the user.
[0031] In this process, each sentence plan generated by the sentence plan generator 220 is an unordered set of elementary speech acts encoding all of the communicative goals determined by the communicative goal generator 210 for the current user interaction. As illustrated above in Table 1, each elementary speech act is represented as a type (request, implicit confirm, explicit confirm), with type-specific parameters. The sentence planning system 100 must decide among alternative realizations of this communicative goal. Some alternative realizations are shown in Table 2, above.
[0032] As discussed above, the sentence planning task is divided by the sentence planning unit 120 into two phases. In the first phase, the sentence plan generator 220 generates 12-20 possible sentence plans, for example, for a given input communicative goal. To accomplish this, the sentence plan generator 220 assigns each speech act a canonical lexico-structural representation called a "Deep Syntactic Structure" (DSyntS). Essentially, the sentence plan is a tree that records how these elementary DSyntSs are combined into larger DSyntSs. From a sentence plan, the list of DSyntSs, each corresponding to exactly one sentence of the target communicative goal, can be read off. In the second phase, the sentence plan ranker 230 ranks sentence plans generated by the sentence plan generator 220, and then selects the top-ranked output which is then input into the realization unit 130.
[0033] In examining each of these phases, the sentence plan generator 220 performs a set of clause-combining operations that incrementally transform a list of elementary predicate-argument representations (the DSyntSs corresponding to elementary speech acts, in this case) into a list of lexico-structural representations of single sentences. As shown in Fig. 4, the sentence plan generator 220 performs this task by combining the elementary predicate-argument representations using the following combining operations:
• MERGE. Two identical main matrix verbs can be identified if they have the same arguments; the adjuncts are combined.
• MERGE-GENERAL. Same as MERGE, except that one of the two verbs may be embedded.
• SOFT-MERGE. Same as MERGE, except that the verbs need only to be in a relation of synonymy or hyperonymy (rather than being identical).
• SOFT-MERGE-GENERAL. Same as MERGE-GENERAL, except that the verbs need only to be in a relation of synonymy or hyperonymy.
• CONJUNCTION. This is standard conjunction with conjunction reduction.
• RELATIVE-CLAUSE. This includes participial adjuncts to nouns.
• ADJECTIVE. This transforms a predicative use of an adjective into an adnominal construction.
• PERIOD. Joins two complete clauses with a period.
[0034] The output of the sentence plan generator 220 is a sentence plan tree (or sp-tree for short), which is a binary tree with leaves labeled by all the elementary speech acts from the input communicative goals, and with its interior nodes labeled with clause-combining operations. Each node is also associated with a DSyntS: the leaves which correspond to elementary speech acts from the input communicative goals are linked to a canonical DSyntS for that speech act by lookup in a hand-crafted dictionary, for example. The interior nodes are associated with DSyntSs by executing their clause-combining operation on their two daughter nodes. For example, a PERIOD node results in a DSyntS headed by a period and whose daughters are the two daughter DSyntSs. As a result, the DSyntS for the entire user interaction is associated with the root node. This DSyntS can be sent to the realization unit 130, which outputs a single sentence or several sentences if the DSyntS contains period nodes.
[0035] The complexity of conventional sentence planning systems arises from the attempt to encode constraints on the application and ordering of system operations in order to generate a single high-quality sentence plan. However, in the sentence planning system 100 process of the invention there is not a need to encode such constraints. Instead, the sentence plan generator 220 generates a random sample of possible sentence plans for each communicative goal generated by the communicative goal generator 210. This may be accomplished by randomly selecting among the operations according to a probability distribution. If a clause combination fails, the sentence plan generator 220 discards that sp-tree. For example, if a relative clause of a structure which already contains a period is created, it will be discarded.
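To make the generation phase concrete, the sketch below models an sp-tree as a binary tree whose leaves are elementary speech acts and whose interior nodes are clause-combining operations chosen at random, with failed combinations discarded. The class and helper names are illustrative, a uniform choice stands in for the (unspecified) probability distribution, and the only failure test encoded is the one example given above, a RELATIVE-CLAUSE applied to a structure that already contains a PERIOD:

```python
import random

OPERATIONS = ["MERGE", "MERGE-GENERAL", "SOFT-MERGE", "SOFT-MERGE-GENERAL",
              "CONJUNCTION", "RELATIVE-CLAUSE", "ADJECTIVE", "PERIOD"]

class SPNode:
    """A sentence-plan-tree node: a leaf speech act or a clause-combining operation."""
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right

    def contains(self, label):
        return self.label == label or any(
            child.contains(label) for child in (self.left, self.right) if child)

def random_sp_tree(speech_acts, rng):
    """Combine speech acts bottom-up with randomly chosen operations.

    Returns None when a clause combination fails; the caller simply
    discards that sample and draws another.
    """
    nodes = [SPNode(act) for act in speech_acts]
    while len(nodes) > 1:
        right, left = nodes.pop(), nodes.pop()
        op = rng.choice(OPERATIONS)  # uniform stand-in for the real distribution
        if op == "RELATIVE-CLAUSE" and (left.contains("PERIOD")
                                        or right.contains("PERIOD")):
            return None  # e.g., a relative clause over a period is discarded
        nodes.append(SPNode(op, left, right))
    return nodes[0]

acts = ["implicit-confirm(orig-city)", "implicit-confirm(dest-city)",
        "implicit-confirm(month)", "implicit-confirm(day-number)",
        "request(depart-time)"]
# Draw candidates; only trees that survive the failure filter are kept.
sample = [t for t in (random_sp_tree(acts, random.Random(seed))
                      for seed in range(40)) if t]
print(len(sample), "surviving sentence plans")
```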
[0036] Table 2 above shows some of the realizations of alternative sentence plans generated by the sentence plan generator 220 for system utterance 5 in the sample dialog above. Sp-trees for alternatives 0, 5 and 8 are shown in Figs. 5 and 6. For example, consider the sp-tree in Fig. 6. Node soft-merge-general merges an implicit-confirmation of the destination city and the origin city. The row labeled SOFT-MERGE in Fig. 4 shows the result of applying the soft-merge operation when Args 1 and 2 are implicit confirmations of the origin and destination cities. Fig. 7 illustrates the relationship between the sp-tree and the DSynt structure for alternative 8 from Fig. 6. The labels and arrows show the DSynt structure associated with each node in the sp-tree. The Fig. 7 diagram also shows how structures are composed into larger structures by the clause-combining operations.
[0037] The sentence plan ranker 230 takes as input a set of sentence plans generated by the sentence plan generator 220 and ranks them. As discussed above, in order to train the sentence plan ranker 230, a machine learning program may be applied to learn a set of rules for ranking sentence plans from the labeled set of sentence-plan training examples stored in the training database 160.
[0038] Examples of boosting algorithms that may be used by the sentence plan ranker 230 for ranking the generated sentence plans are described in detail below. Each example x is represented by a set of m indicator functions h_s(x) for 1 ≤ s ≤ m. The indicator functions are calculated by thresholding the feature values (counts) described below. For example, one such indicator function might be:

h_s(x) = 1 if LEAF-IMPLICIT-CONFIRM(x) ≥ 1, and 0 otherwise
[0039] So h_s(x) = 1 if the number of leaf implicit-confirm nodes in x is at least 1. A single parameter a_s is associated with each indicator function, and the "ranking score" for an example x is then calculated as:

F(x) = Σ_{s=1..m} a_s h_s(x)
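Concretely, each indicator function thresholds a feature count, and the ranking score sums the parameters of the indicators that fire. A minimal sketch, with feature names and weights echoing the Fig. 8 rules discussed below (the rule set here is illustrative, not the learned model):

```python
# Each entry: (feature name, threshold, weight a_s). Values echo the Fig. 8
# discussion below but are illustrative only.
RULES = [
    ("LEAF-IMPLICIT-CONFIRM", 1, 0.94),
    ("PRONOUN", 2, -0.85),
    ("PRONOUN", 3, -0.34),
]

def ranking_score(feature_counts):
    """F(x) = sum_s a_s * h_s(x), where h_s(x) = 1 iff the counted feature
    meets or exceeds the rule's threshold."""
    return sum(weight
               for feature, threshold, weight in RULES
               if feature_counts.get(feature, 0) >= threshold)

# A plan with an initial implicit confirm but 3 PRONOUN nodes:
# 0.94 - 0.85 - 0.34, i.e., roughly -0.25.
print(ranking_score({"LEAF-IMPLICIT-CONFIRM": 1, "PRONOUN": 3}))
```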
[0040] The sentence plan ranker 230 uses this score to rank competing realizations of the same text plan in order of plausibility. The training examples are used to set the parameter values a_s. In this case, the human judgments are converted into a training set of ordered pairs of examples x, y, where x and y are candidates for the same sentence, and x is strictly preferred to y. More formally, the training set τ is:

τ = {(x, y) | x and y are realizations for the same text plan, and x is preferred to y by human judgments}
Thus, each text plan with 20 candidates could contribute up to (20 * 19)/2 = 190 such pairs. In practice, however, fewer pairs could be contributed due to different candidates getting tied scores from the annotators.
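A short sketch of how such ordered pairs could be assembled from annotator scores, with tied candidates contributing no pair, as noted above; the data layout is hypothetical:

```python
from itertools import combinations

def training_pairs(scored_candidates):
    """Build ordered pairs (x, y) with x strictly preferred to y.

    `scored_candidates` maps each candidate plan to its human score; ties
    contribute no pair, so 20 candidates yield at most (20 * 19) / 2 = 190 pairs.
    """
    pairs = []
    for (x, sx), (y, sy) in combinations(scored_candidates.items(), 2):
        if sx > sy:
            pairs.append((x, y))
        elif sy > sx:
            pairs.append((y, x))
    return pairs

# Scores loosely modeled on the H column of Table 2, plus one invented tie.
print(training_pairs({"alt-0": 5.0, "alt-5": 4.5, "alt-8": 2.0, "alt-9": 2.0}))
```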
[0041] Training is then described as a process of setting the parameters a_s to minimize the following loss function:

Loss = Σ_{(x,y)∈τ} e^(-(F(x) - F(y)))
[0042] It can be seen that as this loss function is minimized, the values for (F(x) - F(y)) where x is preferred to y will be pushed to be positive, so that the number of ranking errors (cases where ranking scores disagree with human judgments) will tend to be reduced. Initially all parameter values are set to zero. The optimization method then picks a single parameter at a time, preferably the parameter that will make the most impact on the loss function, and updates the parameter value to minimize the loss. The result is that substantial progress is typically made in minimizing the error rate, with relatively few non-zero parameter values. Consequently, under certain conditions, the combination of minimizing the loss function while using relatively few parameters leads to good generalization on test data examples. Empirical results for boosting have shown that in practice the method is highly effective.
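A minimal sketch of that one-parameter-at-a-time optimization, assuming binary indicator features and a fixed step size in place of the exact minimizing update; this is a toy approximation of the procedure, not the patent's algorithm:

```python
import math

def fit_ranker(pairs, features, n_rounds=20, step=0.1):
    """Greedy coordinate descent on Loss = sum over (x, y) of exp(-(F(x) - F(y))).

    `pairs` lists (x, y) with x preferred to y; `features(z)` returns the
    indicator vector h(z) for candidate z. All parameters start at zero.
    """
    m = len(features(pairs[0][0]))
    a = [0.0] * m

    def F(z):
        return sum(a_s * h_s for a_s, h_s in zip(a, features(z)))

    def loss():
        return sum(math.exp(-(F(x) - F(y))) for x, y in pairs)

    for _ in range(n_rounds):
        best = None
        for s in range(m):              # try each single-parameter update
            for delta in (step, -step):
                a[s] += delta
                candidate = (loss(), s, delta)
                a[s] -= delta
                if best is None or candidate < best:
                    best = candidate
        _, s, delta = best              # apply the update that most reduces loss
        a[s] += delta
    return a

# Toy usage: one feature that fires exactly on the preferred candidates, so
# its weight is pushed positive round after round.
pairs = [("good", "bad")] * 3
feats = lambda z: [1.0 if z == "good" else 0.0]
print(fit_ranker(pairs, feats))
```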
[0043] Fig. 8 shows some of the rules that were learned on the training data that were then applied to the alternative sentence plans in each test set of each fold in order to rank them. Only a subset of the rules that had the largest impact on the score of each sp-tree is listed. Some particular rule examples are discussed here to help in understanding how the sentence plan ranker 230 operates. However, different thresholds and feature values may be used within the spirit and scope of the invention.
[0044] Rule (1) in Fig. 8 states that an implicit confirmation as the first leaf of the sp-tree leads to a large (.94) increase in the score. Thus, all three of the alternative sp-trees accrue this ranking increase. Rules (2) and (5) state that the occurrence of 2 or more PRONOUN nodes in the DSyntS reduces the ranking by 0.85, and that 3 or more PRONOUN nodes reduces the ranking by an additional 0.34. Alternative 8 is above the threshold for both of these rules; alternative 5 is above the threshold for Rule (2) and alternative 0 is never above the threshold. Rule (6) on the other hand increases only the scores of alternatives 0 and 5 by 0.33 since alternative 8 is below threshold for that feature.
[0045] Although multiple instantiations of features are provided, some of which included parameters or lexical items that might identify particular discourse contexts, most of the learned rules utilize general properties of the sp-tree and the DSyntS. This is partly due to the fact that features that appeared less than 10 times in the training data were eliminated.
[0046] Fig. 9 shows an exemplary task classification system 900 that includes the sentence planning system 100. The task classification system 900 may include a recognizer 920, an NLU unit 930, a dialog manager/task classification processor 940, a sentence planning unit 120, a realization unit 130, a text-to-speech unit 140, a discourse history database 150, and a training database 160. The functions and descriptions of the sentence planning unit 120, the realization unit 130, the text-to-speech unit 140, the discourse history database 150, and the training database 160 are set forth above and will not be repeated here.
[0047] The sentence planning unit 120 receives recognition data from the recognizer 920 and understanding data from the NLU unit 930 that are based on input communications from the user. The recognizer 920 and the NLU unit 930 are shown as separate units for clarification purposes. However, the functions of the recognizer 920 and the NLU unit 930 may be performed by a single unit within the spirit and scope of this invention.
[0048] Note that the recognizer 920 may be trained to recognize any number of communication symbols, both acoustic and non-acoustic, including grammar fragments, meaningful words, meaningful phrases, meaningful phrase clusters, superwords, morphemes, multimodal signals, etc., using any of the methods known to one skilled in the art including those found in U.S. Patent Nos. 5,675,707, 5,860,063 and 6,044,337, and U.S. Patent Application Nos. 08/943,944, 09/712,192 and 09/712,194, as discussed above.
[0049] The recognizer 920 and the NLU unit 930 may operate using one or more of a variety of recognition and understanding algorithms. For example, the recognizer 920 and the NLU unit 930 may use confidence functions to determine whether the user's input communications have been recognized and understood. The recognition and understanding data from the user's input communication may be used by the NLU unit 930 to calculate a probability that the language is understood clearly and this may be used in conjunction with other mechanisms like recognition confidence scores to decide whether and/or how to further process the user's communication.
[0050] The dialog manager/task classification processor 940 may be used to solicit clarifying information from the user in order to clear up any system misunderstanding. As a result, if the user's input communication can be satisfactorily recognized by the recognizer 920, understood by the NLU unit 930, and no further information from the user is needed, the dialog manager/task classification processor 940 routes and/or processes the user's input communication, which may include a request, comment, etc. However, if the NLU unit 930 recognizes errors in the understanding of the user's input communication such that it cannot be satisfactorily recognized and understood, the dialog manager/task classification processor 940 may conduct dialog with the user for clarification and confirmation purposes.
[0051] The dialog manager/task classification processor 940 also may determine whether all of the communicative goals have been satisfied. Therefore, once the system has collected all of the necessary information from the user, the dialog manager/task classification processor 940 may classify and route any request or task received from the user so that it may be completed or processed by another system, unit, etc. Alternatively, the dialog manager/task classification processor 940 may process, classify or complete the task itself.
[0052] Note that while Fig. 9 shows the dialog manager/task classification processor 940 as a single unit, the functions of the dialog manager portion and the task classification processor portion may be performed by a separate dialog manager and a separate task classification processor, respectively.
[0053] As noted above, the dialog manager/task classification processor 940 may include, or perform the functions of, the communicative goal generator 210. In this regard, the dialog manager/task classification processor 940 would determine the communicative goals based on the recognized symbols and understanding data and route the communicative goals to the sentence plan generator 220 of the sentence planning unit 120.
[0054] Fig. 10 illustrates an exemplary sentence planning process in the task classification system 900. The process begins at step 10005 and proceeds to step 10010 where the recognizer 920 receives an input communication from the user and recognizes symbols from the user's input communications using a recognition algorithm known to those of skill in the art. Then, in step 10015, recognized symbols are input to the NLU unit 930 where an understanding algorithm may be applied to the recognized symbols as known to those of skill in the art.
[0055] In step 10020, the NLU unit 930 determines whether the symbols can be understood. If the symbols cannot be understood, the process proceeds to step 10025 where the dialog manager/task classification processor 940 conducts dialog with the user to clarify the system's understanding. The process reverts back to step 10010 and the system waits to receive additional input from the user.
[0056] However, if the symbols can be understood in step 10020, the process proceeds to step 10030 where the dialog manager/task classification processor 940 (or the communicative goal generator 210) determines whether the communicative goals in the user transaction have been met. If so, in step 10070, the dialog manager/task classification processor 940 routes the tasks from the user's request to another unit for task completion, or processes the user's communication or request itself. The process then goes to step 10070 and ends.
[0057] If the dialog manager/task classification processor 940 determines that the communicative goals in the user transaction have not been met, the process proceeds to step 10035 where the communicative goal generator 210 (or the dialog manager/task classification processor 940) calculates the communicative goals of the particular transaction with the user using the recognition and understanding data. In step 10040, the communicative goal generator 210 transfers the calculated communicative goals along with the recognition and understanding data to the sentence planning unit 120. In the sentence planning unit 120, sentence plans are generated by the sentence plan generator 220 using input from the discourse history database 150. Then, in step 10045, the generated sentence plans are ranked by the sentence plan ranker 230.
[0058] The process proceeds to step 10050 where the sentence plan ranker 230 selects the highest ranked sentence plan. In step 10055, the selected sentence plan is input to the realization unit 130 where linguistic rules are applied. Then, in step 10060, the realized sentence plan is converted from text to speech by the text-to-speech unit 140 and is output to the user in step 10065. The process then goes to step 10070 and ends.
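Compressed into code, the Fig. 10 flow is a per-turn function: recognize, attempt understanding, clarify on failure, and continue to sentence planning only while goals remain unmet. The stubs below are hypothetical stand-ins keyed to the step numbers above:

```python
def recognize(utterance):
    # Step 10010 (stub): a real recognizer returns symbols with confidences.
    return utterance.lower().split()

def understand(symbols):
    # Steps 10015-10020 (stub): return None when understanding fails.
    return {"dest-city": "columbus"} if "columbus" in symbols else None

def dialog_turn(utterance, goals_met=False):
    symbols = recognize(utterance)
    meaning = understand(symbols)
    if meaning is None:
        # Step 10025: conduct clarification dialog, then await new input.
        return "Sorry, could you rephrase that?"
    if goals_met:
        # Step 10070: route or process the completed task.
        return "ROUTE-TASK"
    # Steps 10035-10065 (stub): goal calculation, sentence planning, ranking,
    # realization, and text-to-speech would produce a prompt like this one.
    return f"Flying to {meaning['dest-city'].title()}. What departure airport was that?"

print(dialog_turn("I need to go to Columbus"))
print(dialog_turn("mumble"))
```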
[0059] In the discussion herein, the terms "natural language understanding" and
"sentence planning" are used to describe the understanding of a user's communication and the automated formulation of a system response, respectively. As such, this invention is directed toward the use of any form of communications received or transmitted over the networks which may be expressed verbally, nonverbally, multimodally, etc. Examples of nonverbal communications include the use of gestures, body movements, head movements, non-responses, text, keyboard entries, keypad entries, mouse clicks, DTMF codes, pointers, stylus, cable set-top box entries, graphical user interface entries, touchscreen entries, etc. Multimodal communications involve communications on a plurality of channels, such as aural, visual, etc. However, for ease of discussion, examples and discussions of the method and system of the invention are discussed above in relation to, but not limited to, verbal systems.
[0060] Note that while the above examples illustrate the invention in a travel service system, this invention may be applied to any single mode, or multimodal, dialog system, or any other automated dialog system that interacts with a user. Furthermore, the invention may apply to any automated recognition and understanding system that receives communications from external sources, such as users, customers, service providers, associates, etc. Consequently, the method may operate in conjunction with one or more communication networks, including a telephone network, the Internet, an intranet, Cable TV network, a local area network (LAN), a wireless communication network, etc.
[0061] In addition, while the examples above concern travel service systems, the sentence planning system 100 of the invention may be used in a wide variety of systems or purposes known to those of skill in the art, including parts ordering systems, customer care systems, reservation systems (including dining, car, train, airline, bus, lodging, travel, touring, etc.), navigation systems, information collecting systems, information retrieval systems, etc., within the spirit and scope of the invention.
[0062] As shown in Figs. 1, 2, and 9, the method of this invention may be implemented using a programmed processor. However, the method can also be implemented on a general-purpose or a special purpose computer, a programmed microprocessor or microcontroller, peripheral integrated circuit elements, an application-specific integrated circuit (ASIC) or other integrated circuits, hardware/electronic logic circuits, such as a discrete element circuit, a programmable logic device, such as a PLD, PLA, FPGA, or PAL, or the like. In general, any device on which resides a finite state machine capable of implementing the flowcharts shown in Figs. 3 and 10 can be used to implement the functions of this invention.
[0063] While the invention has been described with reference to the above embodiments, it is to be understood that these embodiments are purely exemplary in nature. Thus, the invention is not restricted to the particular forms shown in the foregoing embodiments. Various modifications and alterations can be made thereto without departing from the spirit and scope of the invention.

Claims

WHAT IS CLAIMED IS: 1. A system that interacts with a user using an automated dialog system, comprising: a communicative goal generator that generates communicative goals based on a first communication received from the user, the generated communicative goals being related to information needed to be obtained from the user; and a sentence planning unit that automatically plans one or more sentences based on the communicative goals generated by the communicative goal generator, wherein at least one of the one or more planned sentences is output to the user.
2. The system of claim 1, further comprising: a sentence plan generator that generates a plurality of sentence plans based on the generated communicative goals; a sentence plan ranker that ranks the plurality of sentence plans generated by the sentence plan generator and selects the highest ranked sentence plan to be output to the user.
3. The system of claim 2, further comprising: a realization unit that applies a set of linguistic rules to the sentence plan selected by the sentence plan ranker.
4. The system of claim 2, further comprising: a training database that includes a set of learned rules, wherein the sentence plan ranker ranks the sentence plans generated by the sentence plan generator using the set of learned rules.
5. The system of claim 2, further comprising: a discourse history database that includes interaction information related to a set of interactions between the user and the automated dialog system, wherein the sentence plan generator generates a plurality of sentence plans using the interaction information.
6. The system of claim 1, wherein the user's first communication includes nonverbal communications.
7. The system of claim 6, wherein the nonverbal communications include at least one of gestures, body movements, head movements, non-responses, text, keyboard entries, keypad entries, mouse clicks, DTMF codes, pointers, stylus, cable set-top box entries, graphical user interface entries, and touchscreen entries.
8. The system of claim 1, wherein the communicative goal generator generates communicative goals using recognition and understanding data from the user's communication.
9. The system of claim 1, wherein the communicative goals generated by the communicative goal generator include confirming information previously obtained from the user.
10. The system of claim 1, further comprising: a text-to-speech converter that converts at least one of the one or more sentence plans from text to speech.
11. The system of claim 1, wherein the system is used in one of a customer care system, a reservation system, parts ordering system, navigation system, information gathering system, and information retrieval system.
12. An automated sentence planning system that automatically plans sentences based on communicative goals input from an automated dialog system, the communicative goals being related to information needed to be obtained from a user, comprising: a sentence plan generator that generates a plurality of sentence plans based on the communicative goals; a sentence plan ranker that ranks the sentence plans generated by the sentence plan generator and selects the highest ranked sentence plan; and a realization unit that realizes the selected sentence plan, wherein the realized sentence plan is output to the user.
13. The system of claim 12, wherein the realization unit applies a set of linguistic rules to the selected sentence plan.
14. The system of claim 12, further comprising: a training database that includes a set of learned rules, wherein the sentence plan ranker ranks the generated sentence plans using the set of learned rules.
15. The system of claim 12, further comprising: a discourse history database that includes interaction information related to a set of interactions between the user and the automated dialog system, wherein the sentence plan generator generates a plurality of sentence plans using the interaction information.
16. The system of claim 12, wherein the interactions between the user and the automated dialog system include nonverbal communications.
17. The system of claim 16, wherein the nonverbal communications include at least one of gestures, body movements, head movements, non-responses, text, keyboard entries, keypad entries, mouse clicks, DTMF codes, pointers, stylus, cable set-top box entries, graphical user interface entries, and touchscreen entries.
18. The system of claim 12, wherein the communicative goals are generated using recognition and understanding data from a communication received by the automated dialog system from the user.
19. The system of claim 12, wherein the communicative goals include confirming information previously obtained from the user.
20. The system of claim 12, further comprising: a text-to-speech converter that converts the realized sentence plan from text to speech.
21. The system of claim 12, wherein the system is used in one of a customer care system, a reservation system, parts ordering system, navigation system, information gathering system, and information retrieval system.
22. An automated sentence planning system coupled to an automated dialog system that automatically plans sentences based on communicative goals related to information needed to be obtained from a user, comprising: a sentence plan generator that generates a plurality of sentence plans based on the communicative goals and information related to a set of interactions between the user and the automated dialog system; a sentence plan ranker that ranks the sentence plans generated by the sentence plan generator using a set of learned rules and selects the highest ranked sentence plan; a realization unit that realizes the selected sentence plan by applying a set of linguistic rules; and a text-to-speech converter that converts the realized sentence plan from text to speech, wherein the converted sentence plan is output to the user.
23. The system of claim 22, wherein the interactions between the user and the automated dialog system include nonverbal communications.
24. The system of claim 23, wherein the nonverbal communications include at least one of gestures, body movements, head movements, non-responses, text, keyboard entries, keypad entries, mouse clicks, DTMF codes, pointers, stylus, cable set-top box entries, graphical user interface entries, and touchscreen entries.
25. The system of claim 22, further comprising: a communicative goal generator that generates the communicative goals using recognition and understanding data from a communication received by the automated dialog system from the user.
26. The system of claim 22, wherein the communicative goals include confirming information previously obtained from the user.
27. The system of claim 22, wherein the system is used in one of a customer care system, a reservation system, parts ordering system, navigation system, information gathering system, and information retrieval system.
PCT/US2002/007235 2001-03-14 2002-03-11 A trainable sentence planning system WO2002073453A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/258,776 US7729918B2 (en) 2001-03-14 2002-03-11 Trainable sentence planning system
US12/789,883 US8185401B2 (en) 2001-03-14 2010-05-28 Automated sentence planning in a task classification system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US27565301P 2001-03-14 2001-03-14
US60/275,653 2001-03-14

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US10/258,776 A-371-Of-International US7729918B2 (en) 2001-03-14 2002-03-11 Trainable sentence planning system
US12/789,883 Continuation US8185401B2 (en) 2001-03-14 2010-05-28 Automated sentence planning in a task classification system

Publications (2)

Publication Number Publication Date
WO2002073453A1 (en)
WO2002073453A8 WO2002073453A8 (en) 2002-11-14

Family

ID=23053280

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/US2002/007236 WO2002073598A1 (en) 2001-03-14 2002-03-11 Method for automated sentence planning in a task classification system
PCT/US2002/007235 WO2002073453A1 (en) 2001-03-14 2002-03-11 A trainable sentence planning system
PCT/US2002/007234 WO2002073452A1 (en) 2001-03-14 2002-03-11 Method for automated sentence planning
PCT/US2002/007237 WO2002073449A1 (en) 2001-03-14 2002-03-11 Automated sentence planning in a task classification system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2002/007236 WO2002073598A1 (en) 2001-03-14 2002-03-11 Method for automated sentence planning in a task classification system

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/US2002/007234 WO2002073452A1 (en) 2001-03-14 2002-03-11 Method for automated sentence planning
PCT/US2002/007237 WO2002073449A1 (en) 2001-03-14 2002-03-11 Automated sentence planning in a task classification system

Country Status (3)

Country Link
US (4) US7516076B2 (en)
CA (2) CA2408625A1 (en)
WO (4) WO2002073598A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210134292A1 (en) * 2018-09-27 2021-05-06 International Business Machines Corporation Graph based prediction for next action in conversation flow

Families Citing this family (217)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7590224B1 (en) * 1995-09-15 2009-09-15 At&T Intellectual Property, Ii, L.P. Automated task classification system
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US20160132796A9 (en) * 2001-07-26 2016-05-12 Bernd Schneider CPW method with application in an application system
US8126713B2 (en) * 2002-04-11 2012-02-28 Shengyang Huang Conversation control system and conversation control method
US7398209B2 (en) 2002-06-03 2008-07-08 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7693720B2 (en) * 2002-07-15 2010-04-06 Voicebox Technologies, Inc. Mobile systems and methods for responding to natural language speech utterance
CN100346625C * 2002-12-27 2007-10-31 Lenovo (Beijing) Co., Ltd. Telephone voice interactive system and its implementation method
ATE495626T1 * 2003-03-11 2011-01-15 Koninkl Philips Electronics Nv SCRIPT-ORIENTED DIALOGUE SUPPORT FOR THE OPERATOR OF A CALL CENTER
US20080140398A1 (en) * 2004-12-29 2008-06-12 Avraham Shpigel System and a Method For Representing Unrecognized Words in Speech to Text Conversions as Syllables
US7606708B2 (en) * 2005-02-01 2009-10-20 Samsung Electronics Co., Ltd. Apparatus, method, and medium for generating grammar network for use in speech recognition and dialogue speech recognition
US7640160B2 (en) 2005-08-05 2009-12-29 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7620549B2 (en) 2005-08-10 2009-11-17 Voicebox Technologies, Inc. System and method of supporting adaptive misrecognition in conversational speech
US7949529B2 (en) 2005-08-29 2011-05-24 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US7634409B2 (en) 2005-08-31 2009-12-15 Voicebox Technologies, Inc. Dynamic speech sharpening
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
JP4849663B2 * 2005-10-21 2012-01-11 Universal Entertainment Corporation Conversation control device
JP4846336B2 * 2005-10-21 2011-12-28 Universal Entertainment Corporation Conversation control device
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8073681B2 (en) 2006-10-16 2011-12-06 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US7818176B2 (en) 2007-02-06 2010-10-19 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
TWI321313B * 2007-03-03 2010-03-01 Ind Tech Res Inst Apparatus and method to reduce recognition errors through context relations among dialogue turns
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8140335B2 (en) 2007-12-11 2012-03-20 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US9305548B2 (en) 2008-05-27 2016-04-05 Voicebox Technologies Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US8589161B2 (en) 2008-05-27 2013-11-19 Voicebox Technologies, Inc. System and method for an integrated, multi-modal, multi-device natural language voice services environment
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
WO2010045375A1 (en) * 2008-10-14 2010-04-22 Honda Motor Co., Ltd. Improving dialog coherence using semantic features
US8326637B2 (en) 2009-02-20 2012-12-04 Voicebox Technologies, Inc. System and method for processing multi-modal device interactions in a natural language voice services environment
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US9171541B2 (en) 2009-11-10 2015-10-27 Voicebox Technologies Corporation System and method for hybrid processing in a natural language voice services environment
US9502025B2 (en) 2009-11-10 2016-11-22 Voicebox Technologies Corporation System and method for providing a natural language content dedication service
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9760566B2 (en) 2011-03-31 2017-09-12 Microsoft Technology Licensing, Llc Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof
US9842168B2 (en) 2011-03-31 2017-12-12 Microsoft Technology Licensing, Llc Task driven user intents
US10642934B2 (en) 2011-03-31 2020-05-05 Microsoft Technology Licensing, Llc Augmented conversational understanding architecture
US9858343B2 (en) 2011-03-31 2018-01-02 Microsoft Technology Licensing Llc Personalization of queries, conversations, and searches
US9244984B2 (en) 2011-03-31 2016-01-26 Microsoft Technology Licensing, Llc Location based conversational understanding
US9454962B2 (en) * 2011-05-12 2016-09-27 Microsoft Technology Licensing, Llc Sentence simplification for spoken language understanding
US9064006B2 (en) 2012-08-23 2015-06-23 Microsoft Technology Licensing, Llc Translating natural language utterances to keyword search queries
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US8762134B2 (en) 2012-08-30 2014-06-24 Arria Data2Text Limited Method and apparatus for situational analysis text generation
US9355093B2 (en) 2012-08-30 2016-05-31 Arria Data2Text Limited Method and apparatus for referring expression generation
US9135244B2 (en) 2012-08-30 2015-09-15 Arria Data2Text Limited Method and apparatus for configurable microplanning
US9405448B2 (en) 2012-08-30 2016-08-02 Arria Data2Text Limited Method and apparatus for annotating a graphical output
US9336193B2 (en) 2012-08-30 2016-05-10 Arria Data2Text Limited Method and apparatus for updating a previously generated text
US8762133B2 (en) 2012-08-30 2014-06-24 Arria Data2Text Limited Method and apparatus for alert validation
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
US9600471B2 (en) 2012-11-02 2017-03-21 Arria Data2Text Limited Method and apparatus for aggregating with information generalization
WO2014076525A1 (en) 2012-11-16 2014-05-22 Data2Text Limited Method and apparatus for expressing time in an output text
US20140143666A1 (en) * 2012-11-16 2014-05-22 Sean P. Kennedy System And Method For Effectively Implementing A Personal Assistant In An Electronic Network
WO2014076524A1 (en) 2012-11-16 2014-05-22 Data2Text Limited Method and apparatus for spatial descriptions in an output text
US9990360B2 (en) 2012-12-27 2018-06-05 Arria Data2Text Limited Method and apparatus for motion description
WO2014102568A1 (en) 2012-12-27 2014-07-03 Arria Data2Text Limited Method and apparatus for motion detection
US10776561B2 (en) 2013-01-15 2020-09-15 Arria Data2Text Limited Method and apparatus for generating a linguistic representation of raw input data
KR102516577B1 (en) 2013-02-07 2023-04-03 애플 인크. Voice trigger for a digital assistant
US9569425B2 (en) 2013-03-01 2017-02-14 The Software Shop, Inc. Systems and methods for improving the efficiency of syntactic and semantic analysis in automated processes for natural language understanding using traveling features
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
AU2014233517B2 (en) * 2013-03-15 2017-05-25 Apple Inc. Training an at least partial voice command system
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
CN110442699A 2013-06-09 2019-11-12 Method, computer-readable medium, electronic device, and system for operating a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
CN104252469B 2013-06-27 2017-10-20 International Business Machines Corporation Method, equipment and circuit for pattern matching
US9946711B2 (en) 2013-08-29 2018-04-17 Arria Data2Text Limited Text generation from correlated alerts
US9396181B1 (en) 2013-09-16 2016-07-19 Arria Data2Text Limited Method, apparatus, and computer program product for user-directed reporting
US9244894B1 (en) 2013-09-16 2016-01-26 Arria Data2Text Limited Method and apparatus for interactive reports
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
WO2015159133A1 (en) 2014-04-18 2015-10-22 Arria Data2Text Limited Method and apparatus for document planning
US9483763B2 (en) 2014-05-29 2016-11-01 Apple Inc. User interface for payments
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
AU2015266863B2 (en) 2014-05-30 2018-03-15 Apple Inc. Multi-command single utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
WO2016036552A1 (en) 2014-09-02 2016-03-10 Apple Inc. User interactions for a mapping application
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9626703B2 (en) 2014-09-16 2017-04-18 Voicebox Technologies Corporation Voice commerce
US9898459B2 (en) 2014-09-16 2018-02-20 Voicebox Technologies Corporation Integration of domain information into state transitions of a finite state transducer for natural language processing
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
EP3207467A4 (en) 2014-10-15 2018-05-23 VoiceBox Technologies Corporation System and method for providing follow-up responses to prior natural language inputs of a user
US10431214B2 (en) 2014-11-26 2019-10-01 Voicebox Technologies Corporation System and method of determining a domain and/or an action related to a natural language input
US10614799B2 (en) 2014-11-26 2020-04-07 Voicebox Technologies Corporation System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance
US9574896B2 (en) 2015-02-13 2017-02-21 Apple Inc. Navigation user interface
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10200824B2 (en) 2015-05-27 2019-02-05 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US20160358133A1 (en) 2015-06-05 2016-12-08 Apple Inc. User interface for loyalty accounts and private label accounts for a wearable device
US9940637B2 (en) 2015-06-05 2018-04-10 Apple Inc. User interface for loyalty accounts and private label accounts
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179588B1 (en) 2016-06-09 2019-02-22 Apple Inc. Intelligent automated assistant in a home environment
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
US10621581B2 (en) 2016-06-11 2020-04-14 Apple Inc. User interface for transactions
US10331784B2 (en) 2016-07-29 2019-06-25 Voicebox Technologies Corporation System and method of disambiguating natural language processing requests
US10445432B1 (en) 2016-08-31 2019-10-15 Arria Data2Text Limited Method and apparatus for lightweight multilingual natural language realizer
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10467347B1 (en) 2016-10-31 2019-11-05 Arria Data2Text Limited Method and apparatus for natural language document orchestrator
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US11237635B2 (en) 2017-04-26 2022-02-01 Cognixion Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio
US11402909B2 (en) 2017-04-26 2022-08-02 Cognixion Brain computer interface for augmented reality
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770428A1 (en) 2017-05-12 2019-02-18 Apple Inc. Low-latency intelligent automated assistant
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US20180336892A1 (en) 2017-05-16 2018-11-22 Apple Inc. Detecting a trigger of a digital assistant
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10991369B1 (en) * 2018-01-31 2021-04-27 Progress Software Corporation Cognitive flow
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISMISSAL OF AN ATTENTION-AWARE VIRTUAL ASSISTANT
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11217251B2 (en) 2019-05-06 2022-01-04 Apple Inc. Spoken notifications
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. User activity shortcut suggestions
DK201970510A1 (en) 2019-05-31 2021-02-11 Apple Inc Voice identification in digital assistant systems
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11227599B2 (en) 2019-06-01 2022-01-18 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
WO2021056255A1 (en) 2019-09-25 2021-04-01 Apple Inc. Text detection using global geometry estimators
WO2021071115A1 (en) * 2019-10-07 2021-04-15 Samsung Electronics Co., Ltd. Electronic device for processing user utterance and method of operating same
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11038934B1 (en) 2020-05-11 2021-06-15 Apple Inc. Digital assistant hardware abstraction
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
US11562139B2 (en) 2020-11-23 2023-01-24 International Business Machines Corporation Text data protection against automated analysis

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58130396A * 1982-01-29 1983-08-03 Toshiba Corporation Voice recognition equipment
JP2963463B2 * 1989-05-18 1999-10-18 Ricoh Co., Ltd. Interactive language analyzer
JPH06259090A (en) * 1993-03-09 1994-09-16 Nec Corp Voice interactive system
US5694558A (en) * 1994-04-22 1997-12-02 U S West Technologies, Inc. Method and system for interactive object-oriented dialogue management
JPH09510803A * 1995-01-18 1997-10-28 Philips Electronics N.V. Method and apparatus for providing a human-machine dialog that can be assisted by operator intervention
US5574726A (en) * 1995-01-30 1996-11-12 Level One Communications, Inc. Inter-repeater backplane
US5794193A (en) * 1995-09-15 1998-08-11 Lucent Technologies Inc. Automated phrase generation
US6192110B1 * 1995-09-15 2001-02-20 At&T Corp. Method and apparatus for generating semantically consistent inputs to a dialog manager
US5580063A (en) * 1996-01-17 1996-12-03 Birchwood Laboratories Inc. Reusable projectile impact reflecting target for day or night use
JPH11506239A * 1996-03-05 1999-06-02 Philips Electronics N.V. Transaction system
DE19610848A1 (en) * 1996-03-19 1997-09-25 Siemens Ag Computer unit for speech recognition and method for computer-aided mapping of a digitized speech signal onto phonemes
US5754726A (en) * 1996-05-31 1998-05-19 Motorola, Inc. Optical device assembly used for fastening to pc boards
US6275788B1 (en) * 1996-09-26 2001-08-14 Mitsubishi Denki Kabushiki Kaisha Interactive processing apparatus having natural language interfacing capability, utilizing goal frames, and judging action feasibility
US6029085A (en) * 1997-04-09 2000-02-22 Survivalink Corporation Charging and safety control for automated external defibrillator and method
US6567787B1 (en) * 1998-08-17 2003-05-20 Walker Digital, Llc Method and apparatus for determining whether a verbal message was spoken during a transaction at a point-of-sale terminal
US5999904A (en) * 1997-07-02 1999-12-07 Lucent Technologies Inc. Tracking initiative in collaborative dialogue interactions
US6044347A * 1997-08-05 2000-03-28 Lucent Technologies Inc. Methods and apparatus for object-oriented rule-based dialogue management
US6044337A (en) * 1997-10-29 2000-03-28 At&T Corp Selection of superwords based on criteria relevant to both speech recognition and understanding
JP3125746B2 * 1998-05-27 2001-01-22 NEC Corporation PERSON INTERACTIVE DEVICE AND RECORDING MEDIUM RECORDING PERSON INTERACTIVE PROGRAM
US6199044B1 (en) * 1998-05-27 2001-03-06 Intermec Ip Corp. Universal data input and processing device, such as universal point-of-sale device for inputting and processing bar code symbols, document images, and other data
US6219643B1 (en) * 1998-06-26 2001-04-17 Nuance Communications, Inc. Method of analyzing dialogs in a natural language speech recognition system
KR20010079555A (en) 1998-07-24 2001-08-22 비센트 비.인그라시아, 알크 엠 아헨 Markup language for interactive services and methods thereof
US6539359B1 (en) * 1998-10-02 2003-03-25 Motorola, Inc. Markup language for interactive services and methods thereof
JP2002527829A 1998-10-09 2002-08-27 Koninklijke Philips Electronics N.V. Automatic inquiry method and system
US6570555B1 (en) * 1998-12-30 2003-05-27 Fuji Xerox Co., Ltd. Method and apparatus for embodied conversational characters with multimodal input/output in an interface device
US6631346B1 (en) * 1999-04-07 2003-10-07 Matsushita Electric Industrial Co., Ltd. Method and apparatus for natural language parsing using multiple passes and tags
US6233561B1 (en) 1999-04-12 2001-05-15 Matsushita Electric Industrial Co., Ltd. Method for goal-oriented speech translation in hand-held devices using meaning extraction and dialogue
US6314402B1 (en) * 1999-04-23 2001-11-06 Nuance Communications Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system
US6356869B1 (en) * 1999-04-30 2002-03-12 Nortel Networks Limited Method and apparatus for discourse management
US6385584B1 (en) * 1999-04-30 2002-05-07 Verizon Services Corp. Providing automated voice responses with variable user prompting
JP4825999B2 * 1999-05-14 2011-11-30 Sony Corporation Semiconductor memory device and manufacturing method thereof
US6711585B1 (en) * 1999-06-15 2004-03-23 Kanisa Inc. System and method for implementing a knowledge management system
US6418440B1 (en) * 1999-06-15 2002-07-09 Lucent Technologies, Inc. System and method for performing automated dynamic dialogue generation
JP2001005488A (en) * 1999-06-18 2001-01-12 Mitsubishi Electric Corp Voice interactive system
US6324512B1 (en) * 1999-08-26 2001-11-27 Matsushita Electric Industrial Co., Ltd. System and method for allowing family members to access TV contents and program media recorder over telephone or internet
US6553345B1 (en) * 1999-08-26 2003-04-22 Matsushita Electric Industrial Co., Ltd. Universal remote control allowing natural language modality for television and multimedia searches and requests
US6510411B1 (en) * 1999-10-29 2003-01-21 Unisys Corporation Task oriented dialog model and manager
GB9926134D0 (en) * 1999-11-05 2000-01-12 Ibm Interactive voice response system
US6526382B1 (en) * 1999-12-07 2003-02-25 Comverse, Inc. Language-oriented user interfaces for voice activated services
GB9929284D0 (en) * 1999-12-11 2000-02-02 Ibm Voice processing apparatus
US6507643B1 (en) * 2000-03-16 2003-01-14 Breveon Incorporated Speech recognition system and method for converting voice mail messages to electronic mail messages
US7158935B1 * 2000-11-15 2007-01-02 At&T Corp. Method and system for predicting problematic situations in an automated dialog
GB2372864B (en) * 2001-02-28 2005-09-07 Vox Generation Ltd Spoken language interface
US7003445B2 (en) * 2001-07-20 2006-02-21 Microsoft Corporation Statistically driven sentence realizing method and apparatus
US20030216923A1 (en) * 2002-05-15 2003-11-20 Gilmore Jeffrey A. Dynamic content generation for voice messages
US7043435B2 * 2004-09-16 2006-05-09 SBC Knowledge Ventures, L.P. System and method for optimizing prompts for speech-enabled applications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5357596A (en) * 1991-11-18 1994-10-18 Kabushiki Kaisha Toshiba Speech dialogue system for facilitating improved human-computer interaction
US5675707A (en) * 1995-09-15 1997-10-07 At&T Automated call router system and method
US6035267A (en) * 1996-09-26 2000-03-07 Mitsubishi Denki Kabushiki Kaisha Interactive processing apparatus having natural language interfacing capability, utilizing goal frames, and judging action feasibility
US5860063A (en) * 1997-07-11 1999-01-12 At&T Corp Automated meaningful phrase clustering

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210134292A1 (en) * 2018-09-27 2021-05-06 International Business Machines Corporation Graph based prediction for next action in conversation flow
US11600276B2 (en) * 2018-09-27 2023-03-07 International Business Machines Corporation Graph based prediction for next action in conversation flow

Also Published As

Publication number Publication date
US8620669B2 (en) 2013-12-31
WO2002073452A1 (en) 2002-09-19
US20130041654A1 (en) 2013-02-14
US20110320190A1 (en) 2011-12-29
US8180647B2 (en) 2012-05-15
US20090222267A1 (en) 2009-09-03
US20030110037A1 (en) 2003-06-12
WO2002073598A1 (en) 2002-09-19
WO2002073452A8 (en) 2002-11-14
WO2002073449A1 (en) 2002-09-19
US8019610B2 (en) 2011-09-13
WO2002073453A8 (en) 2002-11-14
CA2408625A1 (en) 2002-09-19
CA2408624A1 (en) 2002-09-19
US7516076B2 (en) 2009-04-07

Similar Documents

Publication Publication Date Title
US8180647B2 (en) Automated sentence planning in a task classification system
US8185401B2 (en) Automated sentence planning in a task classification system
US7949537B2 (en) Method for automated sentence planning in a task classification system
US20030115062A1 (en) Method for automated sentence planning
Schatzmann et al. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies
Chu-Carroll MIMIC: An adaptive mixed initiative spoken dialogue system for information queries
McTear Spoken dialogue technology: enabling the conversational user interface
JP4267081B2 (en) Pattern recognition registration in distributed systems
US6785651B1 (en) Method and apparatus for performing plan-based dialog
JP3454897B2 (en) Spoken dialogue system
Souvignier et al. The thoughtful elephant: Strategies for spoken dialog systems
JP4186992B2 (en) Response generating apparatus, method, and program
CN111128175B (en) Spoken language dialogue management method and system
López-Cózar et al. Using knowledge of misunderstandings to increase the robustness of spoken dialogue systems
Lin et al. The design of a multi-domain mandarin Chinese spoken dialogue system
López-Cózar et al. Testing dialogue systems by means of automatic generation of conversations
Filisko Developing attribute acquisition strategies in spoken dialogue systems via user simulation
Lamont Applications of Memory-Based learning to spoken dialogue system development
Carrión On the development of Adaptive and Portable Spoken Dialogue Systems: Emotion Recognition, Language Adaptation and Field Evaluation
Meng MULTILINGUAL DIALOG SYSTEMS
McTear et al. Research Topics in Spoken Dialogue Technology
Mahood A lexical access mechanism for continuous speech understanding
Pittermann et al. Conclusion and Future Directions
Putze Spoken Dialogue Systems
Taddei et al. Second showcase documentation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 10258776

Country of ref document: US

121 EP: the EPO has been informed by WIPO that EP was designated in this application
AK Designated states

Kind code of ref document: C1

Designated state(s): CA US

AL Designated countries for regional patents

Kind code of ref document: C1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WR Later publication of a revised version of an international search report
122 EP: PCT application non-entry in European phase