|Publication number||US6400806 B1|
|Application number||US 09/286,194|
|Publication date||4 Jun 2002|
|Filing date||5 Apr 1999|
|Priority date||14 Nov 1996|
|Also published as||US5915001, US6885736, US20020080927, WO1998021872A1|
|Inventors||Premkumar V. Uppaluru|
|Original Assignee||Vois Corporation|
This application is a continuation of application Ser. No. 08/748,943, filed Nov. 14, 1996.
1. Field of the Invention
This invention relates generally to the construction and use of distributed interactive voice and speech processing systems, including interactive voice response (IVR) systems and voice messaging (VM) systems. More particularly, the invention relates to form-based publishing of voice information and to the use of universally accessible personal profiles both for authenticating the user by voice signature and for generating context-sensitive active vocabularies that improve speaker-dependent speech recognition. The invention also relates to the use of the user attributes and preferences stored in universally accessible personal profiles to improve the efficiency of navigation and search as well as the efficacy of search results pertaining to user queries.
2. Description of the Related Art
Conventional interactive voice response (IVR) systems allow a user to place a telephone call into a system, navigate (generally using touch tone input) through a hierarchy of options in response to voice prompts and retrieve information stored in a computer database. Airlines, banks, credit companies and many other service organizations are just a few examples of the types of businesses using IVR systems to allow a customer (or prospective customer) to retrieve desired information. These conventional systems are generally organization-specific in that they offer access to a single database or set of databases related to the goods, services or other aspects of the organization maintaining the IVR system. Thus, conventional IVR technology is used to offer access to information specific to a single organization (i.e. a specific airline, bank or credit company). For example, airlines typically use IVR to allow callers to access flight arrival and departure information, or to select reservation options, for that particular airline only.
It is desirable to provide an IVR system that enables access to an aggregation of databases and services rather than a single database and service. One barrier to the provision of aggregated services in an IVR system is that conventional IVR systems do not have a distributed information publishing means. Conventional IVR systems do not have a mechanism for service/information providers to readily access the IVR system and add updated or entirely new information for publication on the IVR system.
Further, conventional IVR systems are generally configured for uniform access by any caller admitted to the IVR system. Each caller is handled by the system in the same manner and offered an identical set of options. One reason that IVR systems use uniform user interfaces for each caller rather than caller-specific configurations is that conventional IVR systems operate in "closed" computer environments hosting the particular IVR system. Thus, when a caller accesses a conventional IVR system, the only caller-specific information which the system has at its disposal is any information previously provided by the caller which the system has maintained, or any information that is provided by the caller during the IVR session (e.g., when a user enters an account number using touch-tone telephone input). Because, however, collecting and storing caller-specific information with conventional technology is cumbersome and time consuming, most IVR systems do not offer caller-specific (caller-customized) features.
There are numerous applications in which it is desirable for an IVR system to use caller-specific information in handling a call. Caller-specific information in the form of user preferences can aid in minimizing the size of the command tree which the user must navigate to access desired information. Additionally, caller-specific information could be used to authenticate the identity of a user in cases where security is an issue (e.g., in banking and credit contexts). Further, caller-specific speech training profiles could be used to implement speaker-dependent speech recognition, allowing a caller to use voice commands in place of touch-tone commands. Still further, an IVR system having access to caller-specific data could be used to apply IVR technology in new application areas such as personal productivity.
Thus, there is a need for an improved voice and speech processing system that provides universal access to caller-specific information to provide user-customized IVR systems. Further, there is a need to provide universal access to voice and speech files in order to allow widespread use of such files for caller authentication and for performing speaker dependent speech recognition in IVR systems.
The system and method of the present invention extends World Wide Web (referred to herein as "www" or the "web") and Internet technology to provide universally accessible caller-specific profiles that are accessed by one or more IVR systems. The invention features a set of web pages containing information (components) formatted using MIME and hypertext markup language (HTML) standards with extensions for voice information access and navigation. These web pages are linked using HTML hyperlinks that are accessible to users via voice commands and touch-tone inputs. These web pages, and the components in them, are addressable using HTML anchors and links embedding HTML universal (uniform) resource locators (URLs), rendering them universally accessible over the Internet. This collection of connected web pages is referred to herein as the "voice web" and the individual pages are referred to herein as "voice web pages". Each web page in the voice web contains a specially tagged set of key words and touch tone sequences that are associated with embedded anchors and links used for navigation within the web.
In addition, the invention features a set of linked HTML pages representing the user's "personal profile". The personal profile contains the user's attributes and preferences. Attributes include the user's name, address, phone number, personal identification code, voice imprints for authentication, speech training profile and other information. Preferences include configuration preferences, such as personal greetings and gender and language selection; selection preferences, such as bookmarks and favorite places; and presentation preferences, such as priority ordering, default overrides and preferred vocabulary.
The personal profile is designed for component access within web pages, allowing easy extraction of context-sensitive profile information. In particular, speech training profiles (included as a user attribute, and which contain word patterns representing speaker-dependent training information) are partitioned into sets of related words likely to occur in combination within corresponding voice web pages. A set of command and control words such as "play, pause, continue, previous, next, home, reload, help, etc." is stored in a top level component set, enabling user-dependent but context-independent navigation and control. Other component sets are designed to match the key word sets in corresponding voice web pages, such as a calendar page or an address book page, enabling user- and context-dependent navigation and control.
When a user calls into the distributed voice and speech processing system associated with the voice web, the system first identifies the user utilizing a unique account number (such as phone number or social security number). Next, it accesses the user's personal profile using the corresponding URL and retrieves the user attributes and preferences related to authentication and security. Using this personal profile information, the voice web system authenticates the identity of the user using a combination of personal identification code based password checking and voice imprint matching. The voice imprint is any sufficiently long utterance or phrase that the user has previously entered into his/her profile. Each user's voice imprint is analyzed and stored in the profile for quick matching on demand with a real-time provided user sample. The combination of every individual's unique vocal characteristics stored in the voice imprint coupled with the random choice of the password phrase ensures a high degree of security and authentication.
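The two-factor check described above, combining personal identification code checking with voice imprint matching, can be sketched in Python. This is a minimal illustration only: the similarity measure (a byte-level comparison standing in for acoustic matching), the field names and the threshold are all assumptions, not part of the specification.

```python
import hashlib

def similarity(imprint_a, imprint_b):
    """Toy stand-in for acoustic voice-imprint matching: the fraction of
    positions at which the two stored analyses agree."""
    matches = sum(a == b for a, b in zip(imprint_a, imprint_b))
    return matches / max(len(imprint_a), len(imprint_b), 1)

def authenticate(profile, entered_pin, live_imprint, threshold=0.9):
    """Admit the caller only if both the personal identification code and
    the voice imprint sample match the stored profile components."""
    pin_ok = hashlib.sha256(entered_pin.encode()).hexdigest() == profile["pin_hash"]
    voice_ok = similarity(profile["voice_imprint"], live_imprint) >= threshold
    return pin_ok and voice_ok
```

Requiring both checks mirrors the combination the text describes: the stored imprint supplies the caller's unique vocal characteristics, while the password phrase supplies the secret.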
Once authenticated, the user is allowed to navigate and access more information from the voice web using voice commands. In order to effectively accomplish this task, the voice web system retrieves the context-independent command and control key word set from the user's speech profile.
The voice web system then presents a top level voice web personal home page for the user's perusal. At the same time, it retrieves the set of word recognition patterns associated with the key words in the presented page from the user's speech profile. Thus, the system is able to match the active vocabulary and associated speaker-dependent word patterns dynamically in a context-sensitive manner. The process continues as the user navigates from page to page. The voice web system dynamically retrieves the suitable subset of training word patterns from the user's speech profile matching the voice navigation key words in the page being presented to the user.
The process described above greatly reduces the size of the training information that needs to be retrieved at any time, while significantly enhancing the accuracy of speech recognition using speaker-dependent training profiles. Since the speech profile is constructed using HTML pages and components, it is universally accessible using its URL. This enables the user to call into any compatible Internet-connected voice web system in the user's proximity from anywhere in the world, identify himself/herself to the system and then enable the system to dynamically retrieve suitable information that enhances his/her navigation and access of the information stored in the voice web using voice commands and input.
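The context-sensitive vocabulary retrieval described above can be sketched as follows. The partitioning into a context-independent command set plus per-page component sets follows the text; the data layout, page names and pattern representation (opaque strings) are illustrative assumptions.

```python
# Hypothetical partitioned speech-training profile: a top-level command
# set that is always active, plus component sets matching the key words
# of particular voice web pages (e.g. calendar, address book).
SPEECH_PROFILE = {
    "commands": {"play": "p0", "pause": "p1", "next": "p2", "help": "p3"},
    "calendar": {"today": "c0", "tomorrow": "c1", "meeting": "c2"},
    "addressbook": {"lookup": "a0", "dial": "a1"},
}

def active_patterns(profile, page, page_keywords):
    """Return the speaker-dependent patterns active for the page being
    presented: the context-independent command set, plus the page's
    component set limited to the key words actually tagged in the page."""
    active = dict(profile["commands"])
    component = profile.get(page, {})
    active.update({w: component[w] for w in page_keywords if w in component})
    return active
```

Only the small subset matching the current page is fetched, which is the source of the reduction in retrieved training information the text claims.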
In addition to the user attribute information discussed above, the personal profile contains user preferences relative to configuration, presentation and information selection. These preferences are components within the personal profile pages and are easily available to the voice web system for dynamic retrieval. For example, if the user requests his/her stock portfolio from the voice web, the system first retrieves the user's preferred portfolio of companies from his/her profile and applies this list to limit the search on stock quotes to those companies. The user gets exactly the information relevant to his/her interest, in exactly the order of priority he/she prefers.
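The stock portfolio example above can be sketched as a preference-driven query: the stored preference component both limits the lookup and fixes the presentation order. The company symbols, field names and quote data below are invented for illustration.

```python
# Hypothetical quote source covering all companies.
ALL_QUOTES = {"ACME": 31.5, "GLOBEX": 12.0, "INITECH": 44.2, "HOOLI": 8.7}

def portfolio_quotes(profile, quotes):
    """Return quotes only for the companies in the subscriber's preferred
    portfolio, in the subscriber's own priority order."""
    return [(sym, quotes[sym]) for sym in profile["portfolio"] if sym in quotes]

# A preference component retrieved from the subscriber's personal profile.
profile = {"portfolio": ["INITECH", "ACME"]}
```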
FIG. 1 is a functional block diagram of a voice web system in accordance with the present invention.
FIG. 2A is a functional block diagram of the voice web system shown in FIG. 1 configured to provide voice web services.
FIG. 2B is a functional block diagram of an exemplary calendar service.
FIG. 2C is a functional block diagram of an alternative configuration of a voice web system in accordance with the present invention.
FIG. 3 illustrates a personal voice web used to provide personal services using the system shown in FIG. 2A.
FIG. 4 illustrates a hierarchy of speech training pages that correspond to the service pages shown in FIG. 3.
FIG. 5 illustrates a hierarchy of attributes and preferences pages that correspond to the service pages shown in FIG. 3.
FIG. 6 is a flow diagram of a subscriber authentication method used in the delivery of the personal voice web services shown in FIG. 3.
FIG. 7 is a flow diagram of an enhanced speech recognition process used in the personal voice web system shown in FIG. 3.
FIG. 8 is a flow diagram of a query customization process in accordance with the present invention.
FIG. 9 is a flow diagram of a voice publishing method in accordance with the present invention.
FIG. 10 is a system diagram of a business-yellow-order page system in accordance with the present invention.
The figures depict a preferred embodiment of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
FIG. 1 is a functional block diagram of a voice web system 100 in accordance with the present invention. Voice web system 100 extends the conventional internet and world wide web (“web” or www) technology to voice and speech processing applications and also enables new uses for interactive voice response (IVR) technology. Voice web system 100 includes one or more voice web sites 102 coupled to one or more voice web gateways 105 via the Internet 101. Voice web sites 102 and voice web gateways 105 transfer files over Internet 101 in accordance with hypertext transport protocol (HTTP). A subscriber 107 accesses the voice web system 100 by coupling to the gateway 105 using a telephone 111 coupled to the public switched telephone network (PSTN) 109.
Internet 101 is a system of linked communications networks that facilitate communication among computers which are coupled to Internet 101. Generally, internets such as Internet 101 facilitate communication by providing file transfer, electronic mail and news group services. Internet 101 is preferably the Internet which evolved from the ARPANET and which is publicly accessible worldwide. It should be understood, however, that the principles of the present invention apply to other internets and even closed (private) networks such as corporate intranets.
It should be noted that system 100 may include numerous voice web sites 102 and numerous voice web gateways 105. A single voice web site 102 and a single voice web gateway 105 are shown in FIG. 1, however, to keep the figure uncluttered. Thus, voice web system 100 is a collection of voice web gateways 105 and voice web sites 102 connected over internet 101 enabling subscribers 107 to access voice web pages 103 via their telephones as shown in FIG. 1.
A voice web page 103 is a web page specified using a navigable markup language that includes voice extensions. A navigable markup language is an enhanced type of markup language that facilitates publication, navigation and access of information stored in documents specified in the navigable markup language. An exemplary markup language is the Hypertext Markup Language 2.0, RFC 1866, HTML working group of the Internet Engineering Task Force, Sep. 22, 1995, edited by D. Connolly, published on the www at the following uniform resource locator (URL) address: http://w3.org/pub/www/Markup/html-spec.
A markup language is a language that includes a set of conventions for marking portions of a document so that, when accessed by a parsing program such as a web browser, each marked portion is presented to a user with a distinctive format. In contrast to formatting codes used by word processing programs, markup language codes, called tags, do not specify exactly how the tagged portion should be presented. Instead the tags inform the web browser (parser) that the information is in a certain portion of a document such as title, heading, form or text and the like. The web browser (parser) determines how to present the tagged information.
A navigable markup language is an enhanced markup language that uses tags that serve as anchors and links. When these link and anchor tags are invoked, the user is presented with another navigable markup language document in accordance with the link and anchor tags. Such a link is sometimes called a hyperlink. A hyperlink is a reference to another markup language document which, when invoked, facilitates access to the referenced markup language document.
A navigable markup language thus uses attributes, tags and values that enable (i) a publisher to specify the presentation of information to a user; (ii) a user to interactively access the stored information; and (iii) a user to access other navigable markup language documents using hyperlinks.
The navigable markup language used to specify voice web pages 103 is HyperVoice Markup Language (HVML). HVML is a version of HTML that includes voice extensions as described in Appendix A, incorporated herein by reference. Voice web pages 103 include HVML tags and attributes that extend HTML to facilitate publication, navigation and access to voice information. For example, HVML specifies functions and protocols that facilitate voice and speech processing including voice authentication, speaker dependent speech recognition, voice information publishing (e.g. creating a voice form) and voice navigation.
Just as conventional web documents are displayed for the user, voice web documents 103 are “played” to a subscriber over a telephone. A voice web page 103 is played (by voice web browser 106) by sequentially presenting the embedded voice components according to the HVML and MIME specifications.
While a conventional web site enables on-demand access over an internet to conventional web pages, voice web site 102 enables on-demand access to voice web pages 103. Voice web site 102 is a computer that hosts voice web pages 103 and serves them up to other computers (e.g., voice web gateway 105). More specifically, voice web server 102 is a computer configured with conventional web server software 112 and which has access to stored voice web pages 103. A voice web site 102 may additionally include a subscriber directory 104 that stores a list of registered system subscribers. Voice web site 102 stores, serves and manages voice web pages 103 and can execute associated external scripts or programs in accordance with the present invention. These external scripts and programs interface with databases and other information sources both internal and external to web site 102.
Voice web gateway 105 is a computer connected to Internet 101. Voice web gateway 105 also includes a conventional voice telecommunications interface 114 for coupling to the public switched telephone network (PSTN) 109 for telephonic communications with a subscriber 107. Telephone 111 is any voice-enabled telecommunications device. Exemplary telephones include conventional desktop telephones, portable telephones, cellular telephones, analog telephones, digital telephones, smart phones and a computer configured to operate as a telephone and perform telephonic functions. Thus voice web pages 103 are universally accessible from any ordinary telephone 111. Alternatively, a subscriber 107 may access voice web pages 103 either by using a subscriber interface local to voice web gateway 105 (i.e. a direct user interface with voice web gateway 105) or by dialing into voice web gateway 105 using another computer such as a personal digital assistant or a smart phone.
Voice telecommunications interface 114 serves as an interface between a voice web browser 106 and telephone 111 and preferably includes conventional telephony and voice processing hardware and software enabling voice web gateway 105 to receive and answer telephone calls, respond to touch tone and voice commands, route and conference calls, play voice prompts and record voice messages.
Voice web gateway 105 additionally hosts a voice web browser 106. Voice web browser 106 is a computer program capable of accessing and processing voice web pages 103 in response to a request placed by subscriber 107. More specifically, voice web browser 106 (i) processes voice and touch tone activated subscriber commands, (ii) retrieves requested voice web pages 103 from the appropriate voice web site 102, (iii) interprets the embedded markup language (HVML) in the retrieved voice web page 103 and (iv) delivers the contents of a voice web page 103 to a subscriber 107 over the telephone 111. In performing the above-mentioned processing, voice web browser 106 executes scripts, including “voice scripts” embedded in a voice web page 103. Voice web browser 106 provides a subscriber 107 with fast, easy, convenient voice activated navigation and access to voice web pages 103.
Voice web browser 106 is a conventional web browser modified with appropriate voice information playback and recording extensions and enhancements. Appendix A includes a specification of HVML and voice web browser commands and is incorporated herein by reference.
Some voice web pages 103 contain references to scripts and programs that operate as service agents 110 to respond to subscriber requests as well as external events and carry out prescribed actions. These scripts and programs are externally stored on voice web sites 102 (for example, as Common Gateway Interface (CGI) scripts or Internet Services Application Programming Interface (ISAPI) programs). These external scripts and programs execute in the voice web server 102 environment as a service agent 110. The external scripts and programs that comprise service agents 110 are referred to by URLs embedded in an associated voice web page 103. In the case of a voice web page 103 that is a voice form, the script or program associated with the service agent executes in response to voice form submission by a subscriber 107. Service agents 110 follow standard Internet protocols such as HTTP, and conform to conventional formats such as MIME and application programming interfaces (APIs) such as CGI and ISAPI.
Conventional web pages are designed primarily for presentation on a computer color monitor and navigation by a mouse and keyboard. As such, graphics, images and text are the primary media types supported widely. Although audio, video and 3-dimensional graphics extensions are becoming available, these extensions are directed primarily at computer users and not telephone users.
Voice web pages 103 consist of HTML pages that have been extended with HyperVoice Markup Language (HVML) for easy and effective navigation and access of voice information via a voice activated device such as an ordinary telephone. Voice web pages 103 retain all the properties and behavior of conventional HTML pages, such as HTML markup tags, universal identifiers (URLs) and hyperlinks, and can be accessed by a conventional web browser using HTTP protocols from a conventional web server. The additional markup tags are interpreted by an HVML-extended web browser to enable subscribers 107 to navigate and access voice web pages 103 over the phone or a similar voice activated device. Appendix A includes a specification of HVML and voice web browser commands and is incorporated herein by reference.
HVML web pages (voice web pages 103) are specially designed for presentation using an ordinary telephone 111 and navigation using touch tones and voice commands. This is in contrast to conventional multimedia web pages, which may embed audio data to be presented on a multimedia personal computer using its speakers and navigated using its mouse, keyboard and microphone. Although HVML voice web pages 103 can be embedded in generic multimedia web pages, thus sharing some of the information, they are designed to be presented using an ordinary phone and navigated using commands generated by touch tone signals and speech recognition.
An HVML web page (voice web page 103) is first and foremost an HTML page. Each web page 103 has a unique universal resource locator (URL) (also called uniform resource locator). A URL is a string of characters that uniquely identifies an internet resource, including an identification of (i) the access protocol to be used; (ii) the resource type; and (iii) its location in the computer network. For example, the fictitious URL http://www.voiscorp.com/banner.gif uniquely identifies the location of a resource on the world wide web computer network. "http://" indicates the access protocol. "www.voiscorp.com" is the domain name of the computer on which the resource is located. "banner" is the name of the resource located on the computer specified by the domain name. "gif" indicates that the banner resource is a gif (graphics interchange format) type resource. Similarly, the fictitious URL http://www.voiscorp.com/voicememo.hvml uniquely identifies the location of a voice web page 103. In this example, "voicememo" is the name of the resource located on the computer specified by the domain name, and "hvml" indicates that the voicememo resource is an HVML type resource. Thus, web pages 103 are each uniquely identified by their corresponding URL. Once located, a web page 103 can be created, edited and played using existing web publication tools; it can be stored on any conventional web server anywhere on the Internet; it can be accessed by any conventional web browser and presented on a computer monitor; it can be navigated using the computer's mouse, keyboard and (with some additional plug-ins) microphone; and it can contain embedded anchors and hyperlinks to other HTML pages, including other HVML pages.
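The URL anatomy described above can be checked with Python's standard URL parser. The example uses the specification's own fictitious URL; only the variable names are introduced here.

```python
from urllib.parse import urlparse

# Decompose the fictitious voice web page URL into the three parts the
# text identifies: access protocol, host location, and resource name/type.
url = "http://www.voiscorp.com/voicememo.hvml"
parts = urlparse(url)
scheme = parts.scheme                              # access protocol: "http"
host = parts.netloc                                # domain name of the host
name, ext = parts.path.lstrip("/").rsplit(".", 1)  # resource name and type
```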
Voice web pages 103 are designed for three primary purposes: (i) presenting structured voice information to a user; (ii) enabling the user to navigate across and within voice pages; and (iii) capturing user input for information queries or submission.
a. HVML Presentation. Presentation of voice information is accomplished primarily by the voice tag. The voice tag has a type attribute which specifies the type of voice information to be presented. If the type attribute has the file value, the voice information is obtained from a voice file specified by its URL. If the type attribute has the text value, the voice information is synthesized from the specified text. If the type attribute has number, ordinal, currency, date, or character value, then the voice information is generated by concatenating voice fragments from a pre-recorded indexed system voice file. If the type attribute has the stream value, then the voice information is obtained from the voice stream specified by its URL. Composition of several voice elements into a seamless voice string is accomplished by the voice-string tag.
Combining these tags, publishers can compose and present: (i) pre-recorded voice prompts and messages; (ii) voice prompts generated using text-to-speech technology; and (iii) pre-formatted voice prompts with dynamic speech synthesis elements.
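A browser's handling of the voice tag's type attribute can be sketched as a dispatch over the values the text enumerates. The handler bodies below are placeholder strings, not real playback logic; only the set of type values comes from the text.

```python
def render_voice_element(vtype, value):
    """Dispatch on the voice tag's type attribute, per the values listed
    above: file, text, number/ordinal/currency/date/character, stream."""
    if vtype == "file":
        return f"play voice file at {value}"          # voice file fetched by URL
    if vtype == "text":
        return f"synthesize speech for: {value}"      # text-to-speech synthesis
    if vtype in ("number", "ordinal", "currency", "date", "character"):
        # Concatenate fragments from a pre-recorded indexed system voice file.
        return f"concatenate system voice fragments for {vtype} {value}"
    if vtype == "stream":
        return f"play voice stream at {value}"        # voice stream fetched by URL
    raise ValueError(f"unknown voice type: {vtype}")
```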
b. HVML Navigation. Navigation of voice web pages 103 is primarily accomplished by extending the HTML anchor tag with two new attributes, tone and label. These attributes are used in conjunction with the existing href attribute in an anchor element that makes the anchor into a hyperlink. When the user enters the touch tone signals specified by the value of the tone attribute, or utters the word specified by the label attribute, the browser invokes the corresponding hyperlink. The tone and label attribute values must be unique within a page. Navigation is also accomplished by system commands such as next, previous, reload, home, bookmarks, help, fax, and history, which are invoked by specific touch tone sequences or utterance of the corresponding words. Users can control the voice browser operations by issuing system commands such as stop, start, play, pause, exit, backup, and forward. Using these attributes, publishers can enable (i) touch tone command and control and link navigation; (ii) pre-defined, system and user specific, spoken command and control key word recognition; and (iii) page and user specific spoken command and control key word recognition.
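A browser can build its touch-tone and spoken key-word navigation maps by scanning a page's anchors for the attributes described above. The page fragment below is invented for illustration; the attribute names (href, tone, label) are the ones the text lists.

```python
from html.parser import HTMLParser

# Hypothetical HVML fragment with tone/label-extended anchors.
PAGE = '''
<a href="/calendar.hvml" tone="1" label="calendar">Calendar</a>
<a href="/mail.hvml" tone="2" label="mail">Voice mail</a>
'''

class AnchorMap(HTMLParser):
    """Collect touch-tone and spoken-label mappings to hyperlinks."""
    def __init__(self):
        super().__init__()
        self.by_tone, self.by_label = {}, {}

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            if "href" in a:
                if "tone" in a:
                    self.by_tone[a["tone"]] = a["href"]
                if "label" in a:
                    self.by_label[a["label"]] = a["href"]

parser = AnchorMap()
parser.feed(PAGE)
```

Pressing "2" or saying "calendar" would then resolve to the corresponding hyperlink via these maps.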
c. HVML Forms. HVML uses the form tag to enable user input similar to HTML, including the method attribute, which specifies the way parameters are passed to the server, and the action attribute, which specifies the procedure to be invoked by the server to process the form. HVML extends the input tag within forms by introducing the voice-input tag. Voice-input takes a type attribute similar to the input tag, with three new values, "voice", "tone" and "review", in addition to the existing "reset" and "submit" values. The HVML browser pauses at each voice-input statement in an HVML form until the specified input is supplied or input is terminated, before processing the remainder of the form. Using these tags and attributes, publishers can enable: (i) touch tone command and control and parameter input; (ii) pre-defined, user specific, spoken alphabet and digit input; (iii) page and user specific, spoken key word and proper name input; and (iv) free form voice information input.
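The pause-at-each-voice-input behavior described above can be sketched by scanning a voice form for the inputs a browser would stop at, in document order. The form content below is invented; the tag name (voice-input) and its type values come from the text.

```python
from html.parser import HTMLParser

# Hypothetical voice form using the voice-input tag.
FORM = '''
<form method="post" action="/agent/order">
  <voice-input type="tone" name="account"></voice-input>
  <voice-input type="voice" name="item"></voice-input>
  <voice-input type="review" name="confirm"></voice-input>
  <voice-input type="submit"></voice-input>
</form>
'''

class FormScan(HTMLParser):
    """List the (name, type) pairs the browser must pause for, in order.
    The submit/reset inputs do not pause for subscriber input."""
    def __init__(self):
        super().__init__()
        self.pauses = []

    def handle_starttag(self, tag, attrs):
        if tag == "voice-input":
            a = dict(attrs)
            if a.get("type") in ("voice", "tone", "review"):
                self.pauses.append((a.get("name"), a["type"]))

scan = FormScan()
scan.feed(FORM)
```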
Syntactic and structural intelligence, such as in-line pre-recorded voice prompts, pre-formatted voice prompts with dynamically generated voice elements, key word accessible anchor elements, voice responsive hyperlinks etc., is embedded in voice web pages 103 through voice access extensions to HTML. Behavioral intelligence, including command interpretation, page access, file caching, HVML interpretation and user interaction, is embedded in voice web browser 106 (the HVML browser). Voice web browser 106 has the following states: (i) waiting for user commands; (ii) active, accessing and playing HVML pages; and (iii) paused for user input.
Initially, voice web browser 106 is launched upon the system's receipt of a subscriber's telephone call. Once launched, voice web browser 106 goes through an initialization sequence that includes subscriber authentication and normally becomes “active” accessing and playing the subscriber's home page. Once the home page is played, voice web browser 106 “waits” for subscriber commands. As part of playing the page, the browser may “pause” for subscriber input and continue once the input is provided.
Independent of any specific voice web page 103 that a subscriber may be accessing, voice web browser 106 provides a set of navigational and operational commands. Within the telephone key pad, "*" and "#" are special keys that generate unique tones. Voice web browser 106 attaches special meaning to these keys. In general, the "*" key followed by a sequence of touch tones, excluding the "#" key, signals a browser command, an escape or a skip, and the "#" key signals a link activation, termination of form input, termination of a key sequence or a selection.
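A simplified classifier for the "*"/"#" convention above might look as follows. This sketch ignores the escape and skip cases and uses invented category names; it only illustrates the leading-"*" versus trailing-"#" distinction the text describes.

```python
def classify_keys(sequence):
    """Classify a raw touch-tone key sequence per the */# convention:
    a leading "*" introduces a browser command; a trailing "#" signals
    a link activation, selection or terminator; anything else is plain
    digit input."""
    if sequence.startswith("*"):
        return ("browser-command", sequence[1:].rstrip("#"))
    if sequence.endswith("#"):
        return ("selection", sequence[:-1])
    return ("digits", sequence)
```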
Voice web system 100 can be used to provide voice web services to a subscriber 107. A voice web service is a service that provides on-line telephone based access to information. The information is presented to the user through the publication of voice web pages 103. The information presented to (published for) the subscriber may be information retrieved from a single information source or a combination of information sources, including publicly accessible on-line databases, information proprietary to voice web system 100, information previously stored by subscriber 107 or another information source. Exemplary services provided by voice web system 100 include (i) personal information services such as calendar, address book, electronic mail and voice mail, (ii) information services such as headline news, weather reports, sports scores, stock portfolio quotes, business white pages, yellow pages and classified information and (iii) transaction services (commerce services) such as banking, bill payments, stock trading, airline, hotel and restaurant reservations and catalog store orders.
Users gain access to voice web services by becoming voice web subscribers 107. Subscribers 107 preferably sign up (e.g. register) for services through a service provider. In one embodiment, each subscriber 107 is assigned a unique account number on a calling card and subscribers 107 access the voice web system 100 by dialing a single “800” (e.g. toll free) service phone number and by then supplying their account number via the telephone 111. In an alternative embodiment, the services are publicly available and any user placing a call into the system is processed as a subscriber 107 without requiring any registration.
FIG. 2A is a functional block diagram of a voice web system 200 configured to provide voice web services to a subscriber 107. Voice web system 200 includes one or more voice web gateways 105 coupled to one or more service sites 202 via internet 101. Service site 200 is a voice web site 102 configured to provide voice web services. Each voice web service is implemented using a collection of service agents 201 and service pages 203 centered around a service database 202. Additionally, service site 200 optionally includes a personal profile 204 to be used to the extent that the service being provided requires pre-stored subscriber-specific information (i.e. pre-stored information personal to the particular subscriber).
Voice web service agents 201 are a type of service agent 110 (shown in FIG. 1) that execute on service site 102 to provide voice web services to a subscriber 107. Voice web service agents 201 are therefore scripts and programs represented by a web page 103 (shown in FIG. 1).
Service database 202 is a database of service information. The content of the service information varies with the type of service being provided. For example, if voice web system 100 is configured to deliver a business white page service, then service database 202 is a database of address and phone number listings for businesses. If voice web system 100 is additionally or alternatively configured to deliver news headlines, then voice web system 100 includes a service database 202 that includes current news headlines.
Service forms and pages 203 are voice web pages 103 that are HVML templates (voice forms and pages) that are “filled in” in response to a specific subscriber request. Service pages and forms 203 are used to gather subscriber input, to retrieve information and to deliver (publish) information to a subscriber. Some service pages 203 are database entry and administration forms, some are database query forms and others are database response pages. Entry forms are used to add information to the database. Query forms are used to extract information from the database. Response pages are used to present retrieved information to the user. In the preferred embodiment, service agents dynamically generate service pages and forms 203 by retrieving requested data from service database 202 and using the retrieved data in place of corresponding variables stored in an HVML template. The HVML templates link to each other, specifying request-response dependencies. Thus, subscribers 107 are able to enter and retrieve information in personal and external databases over internet 101 using web protocols without having to create a voice web page for each entry in service database 202.
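The template-filling step above can be sketched with Python's `string.Template` standing in for a real HVML engine; the tag and variable names here are assumptions for illustration, not taken from any HVML specification:

```python
# Illustrative sketch: a service agent fills a response-page template with
# data retrieved from the service database, substituting variables in place.
from string import Template

# An HVML-like response-page template with variables for retrieved data.
APPOINTMENT_TEMPLATE = Template(
    "<hvml><page><say>Appointment on $date at $time: $subject</say>"
    "</page></hvml>"
)

def generate_response_page(db_row: dict) -> str:
    """Fill the template with a row retrieved from the service database."""
    return APPOINTMENT_TEMPLATE.substitute(db_row)

page = generate_response_page(
    {"date": "Nov 14", "time": "3 PM", "subject": "dentist"}
)
print(page)
```

The same template serves every database entry, which is why no per-entry voice web page needs to be authored.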
Service agent 201 typically uses a service database 202 and a set of service pages and forms 203 to provide the corresponding voice web service. The service database 202 hosts the information that subscribers 107 wish to access. The service forms allow subscribers 107 to input and query information in service database 202. Service pages allow service agents 201 to present the requested information to the subscriber 107 using voice web browser 106.
FIG. 2B is a functional block diagram of an exemplary calendar service. The calendar service agent 210 uses the calendar database 211 together with the calendar and appointment details input and query voice web forms 212 and appointment list and details voice web pages 213. Subscribers fill in the calendar and appointment details input voice web forms 212 to set their calendar appointments and their details. The calendar service agent 210 processes the submitted form and updates the calendar service database 211. Later, subscribers can retrieve their appointments for any day by supplying 214 the month, date and year for that day in the calendar query voice web form 212. The calendar service agent 210 processes the submitted form, retrieves the matching appointments from the calendar database, and dynamically composes and returns the appointment list voice web page 213. If the subscriber requests the details of any appointment, the calendar service agent 210 dynamically generates and supplies the corresponding appointment details page 213.
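The calendar query flow above can be sketched as follows; an in-memory dictionary stands in for calendar database 211, and the prompt wording is an illustrative assumption:

```python
# Minimal sketch of the calendar query flow: the agent receives month, date
# and year from the query form, retrieves matching appointments, and composes
# a spoken appointment list.

calendar_db = {
    ("11", "14", "1996"): ["3 PM dentist", "5 PM staff meeting"],
}

def process_query_form(month: str, date: str, year: str) -> str:
    """Retrieve matching appointments and compose an appointment list page."""
    appointments = calendar_db.get((month, date, year), [])
    if not appointments:
        return "You have no appointments for that day."
    items = "; ".join(appointments)
    return f"You have {len(appointments)} appointments: {items}."

print(process_query_form("11", "14", "1996"))
```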
FIG. 3 shows a personal voice web 300 in accordance with the present invention. Personal voice web 300 is a standardized collection of linked voice web pages and voice web forms (a special type of voice web page) that form a personal service space for the subscriber. Preferably, all subscribers share a common structure of linked voice web pages although the contents of personal voice web pages vary from subscriber to subscriber. Because each subscriber of the personal voice web system 300 has the linked page structure shown in FIG. 3, subscribers navigate about and access information from their personal voice web 300 in a standardized way. Each page in personal voice web 300 includes an agent that performs various processing tasks required for each respective page. At the root of personal voice web 300 is the personal home page 301. Personal home page 301 links to a personal profile page 302, a personal administrative assistant page 303, a personal helpdesk page 304, and a personal commerce page 305.
The personal administrative assistant page 303 is linked to a number of personalized voice web services (service pages) 330 including, by way of an example, a calendar and appointments page 309, an address book page 310, a stock portfolio page 311, a news headlines page 312, a mail box page 313, and a business white pages home page 314.
Calendar and appointments page 309 is used to provide an appointments service. The appointments service enables a subscriber to track personal and business appointments in a voice-based calendar. The subscriber thus adds and retrieves appointments over the phone using personal voice web 300. In addition to providing day and time information related to stored appointments, a subscriber may also store voice note annotations that are associated with a particular appointment.
Address book page 310 is used to provide an address service. The address service enables a subscriber to add and retrieve address, phone number, and other information related to individual names or company names. The information added and retrieved is stored in an address book service database private to the subscriber.
Stock portfolio page 311 is used to provide a stock quote service. The stock service enables a subscriber to retrieve current stock pricing and portfolio valuation information as well as statistical information related to changes in portfolio or stock positions. The stock service uses information retrieved from a stock portfolio service database private to the subscriber and additionally retrieves current stock pricing information from an on-line database or information source.
News headlines page 312 is used to provide a news service. The news service enables a subscriber to retrieve news headlines related to subscriber customized topics.
Mail box page 313 is used to provide a mailbox service. The mailbox service enables a subscriber to access electronic mail (e-mail) messages. The e-mail messages are played for the subscriber using text to speech conversion and a speech synthesizer.
Business white pages home page 314 is used to provide a white page service. The white page service enables a subscriber to enter partial company name, and optionally city name and state code to retrieve the company's full name, address and phone number.
Each service page 309-314 is part of a collection of voice forms and pages that are used by the corresponding service agent to retrieve a request from the subscriber, generate an appropriate database query responsive to the subscriber-request, retrieve subscriber-requested information, and generate a voice web page that incorporates the retrieved information and that is adapted for presentation (publication) to the subscriber using a voice web browser. Thus, for example the service agent associated with calendar and appointments page 309 generates a voice form for prompting a subscriber for month, day and year information. After receiving the prompted information, calendar and appointments service agent generates the appropriate query to extract the requested calendar information from a calendar service database. Once the calendar information is retrieved from the database, the calendar and appointments service agent generates a voice web page that includes the retrieved information. The new page is then presented (published) to the subscriber over the telephone by the voice web browser.
Each of the other personal service agents associated with personal service pages 308-327 operates in a similar way to provide a subscriber with information retrieved from associated service databases.
Personal helpdesk page 304 is linked to personal voice web helpdesk service pages 331 including, by way of example, a hotels page 315, an airlines page 316, a rental cars page 317, a travel agents page 318, a restaurants page 319, a financial services page 320, and a banks page 321. The personal helpdesk page has an associated personal helpdesk agent that is used to provide a set of helpdesk services. Helpdesk services enable a subscriber to access product, pricing, availability and other information of the corresponding services.
Hotels page 315 is used to provide a hotel reservation service. Airlines page 316 is used to provide an airline booking service. Rental cars page 317 is used to provide a rental car reservation service. Travel agents page 318 is used to provide a travel service. Restaurants page 319 is used to provide a menu and reservations service. Financial services page 320 is used to provide a financial service. Bank page 321 is used to provide a bank service.
Personal commerce page 305 is linked to personal voice web commerce service pages 332 including, by way of example, an apparel shops page 322, a luggage stores page 323, a gift shops page 324, a flower shops page 325, an office supplies stores page 326, and a book stores page 327. The personal commerce page provides commerce services that enable a subscriber to access catalogs associated with various retail establishments. As part of the commerce service, the personal voice web allows a subscriber to shop in various catalogs and then submit orders for selected items directly to the sponsor of the associated catalog. Orders are submitted to the catalog sponsor as a voice web form or conventional web form sent to the sponsor, as an electronic message, or using another means.
Personal profile page 302 links to a set of personalized voice web profile pages including an authentication page 306, a speech profile page 307, and an attributes and preferences page 308.
User authentication page 306 contains authenticating information including a subscriber account number, an encrypted password or personal identification number and links to a voice authentication signature MIME resource.
Speech profile page 307 is linked to a hierarchy of speech training pages that correspond to the hierarchy of personal voice web 300. FIG. 4 shows the hierarchy 400 of speech training pages 401-427. Speech training pages 401-427 are sets of pre-captured training files to be used in performing speaker dependent speech recognition in providing the corresponding service to a subscriber. Each speech training page is thus accessed by the corresponding agent in performing the corresponding service. For example, the administrative assistant service accesses administrative speech training set 431 (including speech training pages 409-414). The helpdesk service accesses the helpdesk training page set 432 (including speech training pages 415-421). The commerce service accesses the commerce training page set 433 (including speech training pages 422-427).
Each speech training page 401-427 includes training data specifically tailored to the words more commonly associated with the corresponding service. For example, the calendar speech training page 409 includes training vocabulary to aid in the recognition of voice commands such as “Tenth”, “November”, “Tuesday” and so forth.
Referring now again to FIG. 3, personal attributes and preferences page 308 includes subscriber attribute information including name, account number, address, voice telephone number, fax telephone number, paging telephone number, encrypted credit card numbers and the like as well as personal preference information such as configuration, selection and presentation preferences. Personal attributes and preferences page 308 is also linked to a hierarchy of attributes and preferences pages (shown in FIG. 5) that correspond to the hierarchy of personal voice web 300.
FIG. 5 shows the hierarchy of attributes and preferences pages 501-527 associated with personal attributes and preferences page 308. Attributes and preferences pages 501-527 are pages that store subscriber-specific preference information to be used in providing the corresponding service to a subscriber. Each attributes and preferences page 501-527 is thus accessed by the corresponding agent in performing the corresponding service. For example, the administrative assistant service accesses attributes and preferences set 531 (including attributes and preferences pages 509-514). The helpdesk service accesses the helpdesk attributes and preferences set 532 (including attributes and preferences pages 515-521). The commerce service accesses the commerce attributes and preferences set 533 (including attributes and preferences pages 522-527).
It should be noted that the user profile information for multiple subscribers is stored in user profile databases. The user profile databases are accessed by service dependent profile agents. For example, personal identification and verification information of multiple subscribers is stored in a user profile home page database (a service database) and accessed by the subscriber's profile home page agent. Calendar attributes and preferences information for multiple subscribers is stored in the subscriber calendar attributes and preferences profile database (a service database). Calendar service specific speech training information for multiple subscribers is stored in the subscriber calendar speech training profile database (a service database). The calendar service profile agent responds to HTTP form requests for calendar attributes and preferences or calendar speech training profile page information for any particular subscriber and supplies the appropriate subscriber profile page information as HVML voice web pages.
The collection of profile pages for a single user constitutes that user's personal voice web profile 300. Personal voice web profile 300 need not be a collection of static HVML pages (voice web pages), but may instead be generated dynamically using user profile page databases. However, once generated, these profile pages can be reused from various cache systems within the voice web system without having to retrieve them from their original databases, thus saving significant time and resources.
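The cache reuse described above can be sketched as follows; the function names, URL, and page format are illustrative assumptions:

```python
# Sketch: once a profile page is generated from its database, later requests
# for the same URL are served from the cache rather than regenerated.

profile_cache: dict = {}

def generate_profile_page(url: str) -> str:
    """Stand-in for dynamic generation from the user profile database."""
    return f"<hvml>profile page for {url}</hvml>"

def fetch_profile_page(url: str) -> str:
    """Return the cached page if present; otherwise generate and cache it."""
    if url not in profile_cache:
        profile_cache[url] = generate_profile_page(url)
    return profile_cache[url]

page = fetch_profile_page("voiceweb://profiles/subscriber-1")
```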
In operation, a personal voice web service agent uses a corresponding service profile agent to retrieve subscriber and service specific attributes and preferences, speech training profiles and other information from the corresponding service profile database. The personal voice web service agent uses the retrieved subscriber and service specific information in personalizing the voice web service forms and pages as well as in enhancing and improving speech recognition by embedding the speech training profiles in the corresponding voice web forms and pages.
Referring back to FIG. 2B, for example, the calendar service agent 210 uses a corresponding calendar service profile agent 215 to retrieve subscriber specific calendar attributes and preferences included in profile database 216 by specifying the subscriber's calendar attributes and preferences profile URL as part of a profile request web form. Calendar service profile agent 215 responds to the submitted web form, retrieves the requested subscriber information from the calendar service profile database 216 and delivers it to calendar service agent 210 as a table formatted web page. Calendar service agent 210 retrieves the requested information from the table format in the web page and uses the subscriber's attributes and preferences to customize the voice web service form and page templates 213 before presenting them to the subscriber. In this way, the subscriber can have a personalized form or page presented to him/her without having to supply information about himself/herself repeatedly in each call.
Similarly, calendar service agent 210 uses a corresponding calendar service profile agent 215 to retrieve subscriber specific calendar speech training profiles from profile database 216 by specifying the subscriber's calendar speech training profile URL as part of a profile request web form. Calendar service profile agent 215 responds to the submitted web form, retrieves the requested subscriber information from the calendar service profile database 216 and delivers it to the calendar service agent 210 as a table formatted web page. The calendar service agent 210 retrieves the requested information from the table format in the web page and embeds the subscriber's speech training profiles in the voice web form and page templates (pages 212,213) before delivering them to the voice web browser. The voice web browser uses these speech training profiles to dynamically change the active vocabulary in the voice processing software and hardware thereby customizing it to the subscriber.
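The table-formatted exchange between the profile agent and the service agent can be sketched as a round trip; the two-column key/value table layout is an assumption for illustration:

```python
# Hypothetical sketch: the profile agent delivers subscriber data as a
# table-formatted page, and the service agent extracts the rows to customize
# its form and page templates.
import re

def format_profile_table(attrs: dict) -> str:
    """Profile agent side: deliver attributes as a table-formatted page."""
    rows = "".join(
        f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in attrs.items()
    )
    return f"<table>{rows}</table>"

def parse_profile_table(page: str) -> dict:
    """Service agent side: recover the attributes from the table."""
    cells = re.findall(r"<tr><td>(.*?)</td><td>(.*?)</td></tr>", page)
    return dict(cells)

page = format_profile_table({"home_town": "Cupertino", "work_hours": "9-5"})
attrs = parse_profile_table(page)
```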
FIG. 2C is a functional block diagram of an alternative configuration of a voice web system in accordance with the present invention. The system includes a computer configured as a combined voice gateway and voice web site (combined site) 220. Combined site 220 includes gateway components such as a voice and telephony interface 114, a voice web browser 106 and server software 112. Combined site 220 additionally includes voice web site components such as service agents 201, service database 202 and service forms and pages 203. Combined site 220 provides voice web access to a subscriber 107 coupled to the combined site 220 via the PSTN 109. Because the voice gateway and voice web site functions are combined within a single computer environment, the server software 112 (located in combined site 220) and the voice web browser 106 exchange files without suffering the delays imposed by routing across the Internet 101. In certain applications, for example when a subscriber is accessing personal databases, this configuration is advantageous to improve system performance. It should be noted, however, that even though server software 112 (located on combined site 220) and voice web browser 106 exchange files using a local interface as opposed to Internet 101, they nonetheless exchange files in accordance with HTTP.
Voice web browser 106 communicates with other web sites (such as web sites 224 and 225) using Internet 101. Web site 224 is a computer coupled to Internet 101 configured with server software 112, service agents 201, service database 202 and service forms and pages 203. Web site 224 is configured to deliver voice web services as described in reference to FIGS. 2A and 2B.
Web site 225 is a computer configured with server software 112, a profile service agent 223, service forms and pages 222 and profile database 221. Web site 225 is a universally accessible profile web site that is accessed by any other web site or web gateway in the voice web system as long as the accessing web site or web gateway has the appropriate URL information. Web site 225 provides user profile information to web site agents (such as service agents 201) located on other web sites (such as web site 224 and combined site 220). Advantageously, any web site and/or web gateway can thus access information stored in profile database 221 by hyperlinking to the web page associated with profile service agent 223.
Personal voice web system 300 uses a login agent as a gatekeeper to the access of each subscriber's personal voice web. The login agent is a distributed software program that can receive subscriber information over a telephone, access the subscriber's personal profile pages from the subscriber's personal voice web and verify the subscriber's credentials over the telephone.
Each system subscriber is given (i) an account number (ii) a personal identification number (PIN) and (iii) a service calling number. In order to access a personal voice web, the subscriber calls the service calling number and uses account information and the PIN to initiate a subscriber authentication process. FIG. 6 is a flow diagram of a subscriber authentication method 600 in accordance with the present invention. The subscriber authentication method 600 includes authentication signature creation form processing and subscriber authentication processing.
A subscriber initiates access 601 of his or her personal voice web 300 by calling the service calling number using a conventional telephone or a similar voice activated device or computer configured to access the public telephone network. After the subscriber initiates access 601, a login agent starts login processing 602.
During login processing 602, the login agent answers the call and presents a standard login form to the subscriber. A login form is a voice form for collecting and submitting login information including the subscriber account number and the subscriber PIN. After a subscriber enters the login information (into the login form) and submits the login form, the login agent uses the login information to retrieve the URL of the subscriber's personal voice web home page 301. The login agent retrieves the URL by looking up the subscriber's account number in the voice web subscriber directory. The login agent additionally verifies the submitted PIN. Upon verification of the PIN, the login agent presents 603 the subscriber's voice authentication form to the subscriber over the telephone. As part of the presentation, the login agent requests the subscriber to supply a personalized voice authentication sample. The login agent then waits 604 for the subscriber to supply the sample and submit 605 the form. After the subscriber submits 605 the form, the login agent processes 606 the submitted form. During processing 606 of the submitted form, the login agent accesses the subscriber's personal authentication page from the subscriber's personal voice web profile (linked to the subscriber's home page) and attempts to retrieve the voice authentication signature. If this is the first time the subscriber is accessing the service, the signature will be missing from the subscriber's authentication page. In this case, the login agent presents 607 the authentication signature creation form to the subscriber.
Using the options presented in the signature creation form, the subscriber selects the option to create or modify the personal voice authentication signature.
Following the instructions provided by the login agent, the subscriber fills in 608 the voice authentication signature creation form and records a personalized voice phrase as an authentication signature. After filling in 608 the signature creation form, the subscriber submits the form to the login agent. The login agent waits until the signature creation form is submitted 609. The login agent then processes 610 the recorded phrase converting it into a signature pattern and linking it to the user authentication page as a MIME resource for future verification.
If, however, after processing 606, the login agent determines that there is an authentication signature stored in the subscriber's personal profile, then the login agent performs a test 611 to determine whether there is a match between the stored authentication signature and the voice sample submitted by the subscriber. If test 611 determines that there is a match between the sample and the signature, then the subscriber is given access to the personal voice web and the voice web. Test 611 uses conventional voice authentication methods. A “match” is determined by test 611 when the conventional voice authentication method determines that the speaker's voice print or voice signature matches a master stored voice print or voice signature within a specified tolerance. If, however, the test determines that there is not a match between the sample and the signature, then the subscriber is denied access 613.
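The "match within a specified tolerance" test above can be sketched as follows. Real systems compare voice prints with techniques such as dynamic time warping or statistical models; here a Euclidean distance over fixed-length feature vectors stands in, and the threshold value is an illustrative assumption:

```python
# Minimal sketch of test 611: accept the caller only if the submitted voice
# sample's feature vector falls within a tolerance of the stored signature.
import math

TOLERANCE = 0.5  # assumed tolerance; tuned per deployment in practice

def voices_match(stored_signature, submitted_sample) -> bool:
    """Return True if the sample is within tolerance of the signature."""
    distance = math.dist(stored_signature, submitted_sample)
    return distance <= TOLERANCE

# Same speaker: small feature distance, access granted.
assert voices_match([0.1, 0.9, 0.4], [0.12, 0.88, 0.41])
# Different speaker: large feature distance, access denied.
assert not voices_match([0.1, 0.9, 0.4], [0.9, 0.1, 0.9])
```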
Automatic speech recognition falls into three categories: speaker dependent, speaker adaptive, and speaker independent. A speaker dependent system is developed to work for a single speaker and is usually easier to develop, cheaper to buy and more accurate, but it requires the use of user-specific speech training files.
The size of the vocabulary of a speech recognition system affects the complexity, processing requirements and the accuracy of the system. Referring now again to FIG. 3, personal voice web 300 uses small to medium sized vocabularies (tens to hundreds of words).
An isolated-word or discrete speech system operates on single words at a time, requiring a pause between each word utterance. This conventional type of speech recognition is a simple form of recognition to perform because the end points are easier to find and the pronunciation of a word tends not to affect others. As the occurrences of the words are more consistent and sharply delimited, they are easier to recognize. Personal voice web 300 focuses on discrete speech and in particular on speech used for command and control.
Personal voice web 300 typically uses speech coded at 8 kHz using 8 bit samples, resulting in 64 kbps bandwidth and storage. Conventional adaptive differential pulse code modulation (ADPCM) techniques can reduce the bandwidth to 16 kbps without loss of information.
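The bandwidth figures above follow directly from the sampling parameters (the 2-bit ADPCM rate is one common configuration that yields the quoted 16 kbps):

```python
# Worked arithmetic for the coding rates stated above.

sample_rate_hz = 8_000   # 8 kHz telephone-quality sampling
bits_per_sample = 8      # 8-bit samples

pcm_bandwidth_kbps = sample_rate_hz * bits_per_sample / 1_000
print(pcm_bandwidth_kbps)        # 64.0 kbps, as stated

# ADPCM at 2 bits per sample yields the quoted 16 kbps.
adpcm_bandwidth_kbps = sample_rate_hz * 2 / 1_000
print(adpcm_bandwidth_kbps)      # 16.0 kbps
```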
Personal voice web 300 uses conventional speaker dependent recognition of discrete speech. This conventional speaker dependent recognition relies on digital sampling of the word utterances. After sampling, the next stage is acoustic signal processing. Most techniques include spectral analysis. This is followed by recognition of phonemes, groups of phonemes and words. This stage uses many conventional processes such as Dynamic Time Warping, Hidden Markov Modeling, Neural Networks, expert systems and combinations of techniques. Hidden Markov Modeling based techniques are commonly used and generally the most successful approach. Additionally, personal voice web 300 uses some knowledge of the language to aid the recognition process.
Personal voice web 300 improves speaker dependent recognition of discrete speech in a command and control context using universally accessible personal speech training profiles 401-427. As described above, the personal speech training pages 401-427 are organized as a linked collection of voice web profile pages each linked to the corresponding personal voice web service page. Thus, the personal speech training profile pages parallel the personal voice web service pages in structure as shown in FIGS. 3 and 5. Each speech training page 401-427 contains the training vocabulary for browser command and control that is context dependent.
Each service page 302-327 linked to the personal voice web home page 301 has a corresponding speech training page 402-427. The personal voice web 300 is constructed in such a way that each voice web service page 302-327 links to its corresponding speech training page 402-427 using its URL. As the subscriber navigates from service page to service page in the personal voice web 300, the system is able to access the corresponding speech training page using its embedded URL.
Each speech training page 401-427 contains a set of command and control key words and their personalized speech recognition patterns representing the context sensitive vocabulary for the corresponding service page. For example, the calendar and appointments service page 309 is linked to a corresponding speech training page 409 containing key words and recognition patterns for “year”, “month”, “day”, the names of the months and days, digits representing dates and times etc. Similarly, stock portfolio page 311 is linked to a corresponding speech training page 411 containing key words and recognition patterns for “stock”, “quote”, “volume”, “option”, “symbol”, names of companies in the portfolio etc.
FIG. 7 is a flow diagram of a speech recognition process 700 in accordance with the present invention. The process is initiated after a subscriber has gained access 701 to the personal voice web in accordance with the process described in reference to FIG. 6. Once the subscriber gains access to the personal voice web 701, the login agent accesses the subscriber's personal voice web home page and presents 702 the home page to the subscriber over the phone. During the process of presenting 702 the home page, the login agent loads the personal voice web profile page 302 and the speech profile page 501 containing the command and control vocabulary for the home page. This vocabulary includes the basic voice web browser command and control as well as home page specific command and control. From the home page, the subscriber requests a particular service (i.e. personal administrative assistant, the personal helpdesk or the personal catalog store). The home page agent determines 703 what service the subscriber has selected and in response, invokes 704 the selected service and then proceeds to deliver 705 the service. During invocation 704 of the service, both the service page and the speech training page associated with the service page are loaded on the voice web gateway where the voice web browser uses them to deliver the service and improve speech recognition.
During delivery 705 of the selected service, the service agent uses the speech training page associated with the selected service to recognize voice commands submitted 720 by the subscriber. Specifically, the service agent obtains the speech training profile, embeds it in the service page as a MIME resource and forwards it to the voice web browser which uses the training profiles to improve recognition. Thus, responding to the subscriber's voice commands pertinent to the accessed voice web service page, the voice web browser recognizes the command and control word utterances (the subscriber's voice commands that are submitted 720) and matches them against the personalized vocabulary in the corresponding voice web speech training page for accurate speaker dependent recognition of discrete speech.
If the subscriber requests access to a new service page linked to a currently accessible service page, the currently active service agent exits 706 the current service and then invokes 704 the requested service. During the invocation of the requested service, the requested voice web service page corresponding to the requested service is loaded, as well as the corresponding speech training page containing the matching command and control vocabulary. In this process 700, the active service agent always uses the most appropriate vocabulary for the existing context, thereby greatly reducing the size of the active vocabulary that needs to be accessed while significantly improving the speaker dependent recognition.
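The vocabulary switching described in process 700 can be sketched as follows; the page names and vocabularies are illustrative assumptions:

```python
# Sketch of context-sensitive vocabulary switching: when a service page is
# invoked, only the vocabulary from its speech training page becomes active,
# keeping the active vocabulary small for the current context.

TRAINING_PAGES = {
    "calendar": {"year", "month", "day", "january", "tuesday"},
    "stocks": {"stock", "quote", "volume", "option", "symbol"},
}

active_vocabulary = set()

def invoke_service(page_name: str) -> None:
    """Load the speech training vocabulary for the requested service page."""
    global active_vocabulary
    active_vocabulary = TRAINING_PAGES[page_name]

def recognize(utterance: str):
    """Match an utterance only against the small active vocabulary."""
    return utterance if utterance in active_vocabulary else None

invoke_service("calendar")
assert recognize("month") == "month"
assert recognize("quote") is None   # not in the calendar context
invoke_service("stocks")            # exiting one service loads the next page
assert recognize("quote") == "quote"
```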
Query customization uses stored subscriber attributes and preferences to customize queries of service databases. Query customization is accomplished by maintaining user attributes and preferences in a collection of voice web pages 501-527 (described above in reference to FIG. 5) that parallel the corresponding voice web service pages 301-327 (described above in reference to FIG. 3) and by using the attribute and preference information corresponding to the requested service to customize the query parameters within forms.
Referring now again to FIG. 5, the attributes and preferences pages 501-527 parallel the personal voice web service pages 301-327 in structure as shown in FIG. 3. Each service page linked to the personal voice web home page 301 has a corresponding voice web attributes and preferences page linked to it. The personal voice web 300 is constructed in such a way that each voice web service page 301-327 links to its corresponding voice web attributes and preferences page 501-527 using its URL. As the subscriber navigates from service page to service page in the personal voice web 300, the system is able to access the corresponding voice web attributes and preferences page using its embedded URL.
A subscriber of voice web services requests information by accessing a voice web service page and having it played by the corresponding agent (i.e. administrative assistant, helpdesk or commerce agent). The subscriber requests service by submitting a query form presented by the corresponding agent. The query form is an HVML form for touch tone and voice data input. When a service is requested by the subscriber, the agent retrieves the corresponding voice web attributes and preferences page and automatically fills the query form with appropriate default parameters obtained from the subscriber's attributes and preferences. For example, if the subscriber is accessing the weather service page, the agent fills in the subscriber's home town and other chosen cities automatically from the subscriber's attributes and preferences page. Similarly, if the subscriber is accessing the stock portfolio service page, the agent accesses the corresponding attributes and preferences page and fills in the subscriber's chosen portfolio of stocks in the query form. In addition, the agent also automatically fills in the appropriate subscriber attributes such as his/her access account number, password, etc., thereby easing the subscriber's access while exploiting the availability of services through web based queries.
FIG. 8 is a flow diagram of a query customization process 800 in accordance with the present invention. The process is initiated after a subscriber has gained access 801 to the personal voice web in accordance with the process described in reference to FIG. 6. Once the subscriber gains access 801 to the personal voice web, the login agent accesses the subscriber's personal voice web home page and presents 802 the home page to the subscriber over the phone.
During the process of presenting 802 the home page, the login agent loads the attributes and preferences page 501 from the subscriber's voice web personal profile. Attributes and preferences page 501 contains preferences for the home page 301. From the home page 301, the subscriber accesses the targeted voice web service page by navigating the appropriate hyper links from the voice web home page 301. In response, the selected service is invoked 803 and the selected service then proceeds to deliver 804 the service. During invocation 803 of the selected service, both the service page and the attributes and preferences page associated with the service page are extracted by the service agent.
During delivery 804 of the selected service, the service agent uses the attributes and preferences page associated with the selected service to customize queries of the associated service database. More specifically, using the attributes and preferences information, the service agent automatically fills in the needed fields in the corresponding query form with user specified defaults and preferences. Having filled the appropriate fields, the service agent plays the remaining query form to the subscriber thereby greatly reducing the information that the subscriber has to supply on the telephone. The service agent then obtains the remaining information, if any, from the subscriber and submits the query form to the service database. When the results are returned (i.e. the information is retrieved from the service database), the service agent plays the results to the subscriber over the telephone.
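The pre-filling step of query customization can be sketched as below. This is a minimal illustration under assumed data structures; the function name `prefill_query_form` and the field names are hypothetical, not taken from the patent.

```python
# Sketch of query customization: the service agent pre-fills a query form
# with defaults taken from the subscriber's attributes and preferences
# page, and only the remaining fields are prompted for over the phone.

def prefill_query_form(form_fields, preferences):
    """Return (filled, remaining): fields resolved from the attributes and
    preferences page, and fields the subscriber must still supply."""
    filled = {f: preferences[f] for f in form_fields if f in preferences}
    remaining = [f for f in form_fields if f not in preferences]
    return filled, remaining

weather_form = ["city", "state", "units"]
prefs = {"city": "San Jose", "state": "CA"}  # from attributes page 501-527

filled, remaining = prefill_query_form(weather_form, prefs)
print(filled)     # {'city': 'San Jose', 'state': 'CA'}
print(remaining)  # ['units']
```

Only the `remaining` fields need to be played to the subscriber, which is the source of the reduction in telephone interaction described above.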
In another aspect of the invention, voice web system 100 enables publishers to compose voice web forms and pages statically using ordinary word processing programs and link them to voice files created using ordinary audio capture and editing tools available on personal computers and workstations. Alternatively, voice web agents can dynamically compose voice web pages and forms based on user requests and, optionally, user profiles, as well as on accessed databases and services. Advantageously, dynamic form-based publication enables information and service providers to publish voice web pages using a conventional telephone without the need for any additional computer based voice web publishing tools. Dynamic form-based publication is achieved by combining voice web publishing forms, voice web publishing agents and voice web page publishing templates.
FIG. 9 is a flow diagram of a voice publishing method in accordance with the present invention. The method presents 901 a voice web form to a caller calling into a voice web system using a conventional telephone. Voice web publishing forms are specially designed voice web forms that when interpreted (i.e. when played back) using the voice browser prompt the caller (the voice information publishers) to input voice and touch tone based input using a telephone. The forms guide the caller step by step to supply the needed information, edit and modify the information and finally submit 903 the information for processing 902.
Voice web publishing agents process 902 the filled voice web publishing forms, extracting and separating voice information and touch tone input. Based on the touch tone inputs, the agents may present additional publishing forms to the caller (publisher). The voice information is stored 904 in voice files and linked to the corresponding voice web page publishing template by substituting variables within the page template with the generated files. The touch tone input is used whenever the caller (publisher) needs to input alphanumeric information that can be processed by the publishing agent.
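The variable-substitution step can be sketched as follows. The `%NAME%` placeholder syntax, the file paths, and the function name are assumptions for illustration only; the patent does not specify a placeholder notation for its templates.

```python
# Sketch of dynamic form-based publishing: variables in a voice web page
# publishing template are replaced with the URLs of voice files generated
# from the caller's submitted publishing form.

import re

def instantiate_template(template, voice_files):
    """Substitute %NAME% placeholders with generated voice file URLs;
    placeholders with no corresponding file are left intact."""
    def repl(match):
        name = match.group(1)
        return voice_files.get(name, match.group(0))
    return re.sub(r"%(\w+)%", repl, template)

template = '<VOICE TYPE="File" SRC="%TAGLINE%"><VOICE TYPE="File" SRC="%HOURS%">'
files = {"TAGLINE": "biz/4085551212/tagline.vox",
         "HOURS": "biz/4085551212/hours.vox"}

print(instantiate_template(template, files))
```

The same substitution mechanism serves the yellow page and order entry page templates described below, with each template's variables drawn from a different publishing form.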
Without limiting the general applicability of form based voice web page publishing, a specific application of the process of form-based publishing is next described. The exemplary form based publishing process relates to the publication of voice web business white pages, yellow pages and order entry pages. FIG. 10 shows a white-yellow-order page system 1000 in accordance with the present invention. Voice web business white pages 1001 are voice web pages that are dynamically composed by the voice web business white pages agent 1003 from information in a business white page database 1002, including the name, address and phone number of businesses. The white pages agent 1003 presents a search form to a caller for specifying the name of the business and allows further narrowing of the search by city and state. Each business white page can be linked to a corresponding business yellow page 1004. Business yellow pages 1004 contain additional information about the business including a tag line, advertisement, directions, working hours, and promotions. In addition, each yellow page 1004 can be linked to a corresponding business order entry form 1005. Business order entry forms 1005 allow users to order products and services or transact business by specifying product or service codes, preferences, quantity, and credit card numbers for payment.
A participating business can publish a voice web yellow page 1004 by simply filling a corresponding voice web yellow page publishing form 1007. A yellow page publishing agent 1006 processes the yellow page publishing form 1007 and dynamically generates a business yellow page 1004 for that business from a standard yellow page template by replacing variables in the template with values supplied by the submitted yellow page publishing form.
The yellow page publishing agent 1006 (a publishing agent) presents a yellow page voice web publishing form 1007 to the participating business. Voice web publishing forms are specially designed voice web forms that when interpreted (i.e. when played back) using the voice browser prompt the caller (the voice information publishers) to input voice and touch tone based input using a telephone. Yellow page publishing form 1007 guides the caller step by step to supply the needed information, edit and modify the information and finally submit the information for processing, as described in reference to FIG. 9. Specifically, yellow page publishing form 1007 prompts for voice information including name, tag line, advertisement, directions, working hours and promotions. In addition, the yellow page publishing agent 1006 prompts for touch tone input including the account number, password, phone number, yellow page category code and credit card number. Yellow page publishing agent 1006 uses the account number to identify the business, the password to verify the business, the phone number to link it to the corresponding white page, the yellow page category code to classify the business within business yellow pages, and the credit card number to pay for the business yellow page. Once the business is identified and verified, yellow page publishing agent 1006 dynamically creates a business yellow page 1004 from a standard template for the appropriate category. Yellow page publishing agent 1006 uses the supplied business phone number to match with the appropriate database entry in the business white pages and updates it with the URL of the newly created yellow page to link it.
A very similar process occurs for publishing order entry forms. A business order entry form publishing agent, order page publishing agent 1008, presents an appropriate order entry publishing form 1009 to a participating business. Order page publishing agent 1008 requests appropriate customized prompts for specific fields in the business order entry form such as product or service code, customer preferences, quantity, credit card number, etc. Order page publishing agent 1008 also requests touch tone input for the account number, password, phone number, and credit card number. Order page publishing agent 1008 uses the account number and password for identification and verification, the phone number to link it to the corresponding yellow page 1004 and the credit card number for payment for the order entry form. Once the business is identified and verified, order page publishing agent 1008 dynamically generates an order entry form for that business by filling the supplied information into a standard order entry template for that business category. Order page publishing agent 1008 uses the supplied business phone number to match with the appropriate database entry in the business white pages, updates it with the URL of the newly created order entry page, locates the corresponding yellow page using its URL in the database, and updates it to link to the newly created order entry page.
The foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Hyper Voice Markup Language (HVML) consists of a set of extensions to existing HTML. Some of the extensions are new elements with new tags and attributes. Others are extensions to existing elements in the form of new attributes. All attribute values are shown as % value type %.
In-line Voice components
The primary mechanism for introducing voice prompts into an HTML page is a new inline voice HVML element similar to the in-line image HTML element. The tag for this element is “VOICE” and it has many variations. Each variation is specified by the value of the TYPE attribute. Depending on the type, each variation has additional attributes.
<VOICE TYPE=“File” SRC=“% URL %” TEXT=“% text %”>
VOICE tag with TYPE set to “File” indicates a file containing pre-recorded voice information. Its attributes are SRC and TEXT. The SRC attribute specifies the URL for the voice file and the TEXT attribute, which is optional, specifies the text that can be translated to speech as an alternative to the voice file.
Voice Index Files
<VOICE TYPE=“Index” SRC=“% URL %” INDEX=“% index %” TEXT=“% text %”>
VOICE tag with TYPE set to “Index” indicates an indexed file containing pre-recorded voice phrases. Its attributes are SRC, INDEX and TEXT. SRC and TEXT have the same meaning as for voice files. The INDEX attribute specifies the index of the phrase within the file, either as a number or a label.
<VOICE TYPE=“File” SRC=“myweb/home/greeting.wav”>
<VOICE TYPE=“Text” TEXT=“% text %”>

VOICE tag with TYPE set to “Text” indicates a text-to-speech string. Its attribute is TEXT, which specifies the string that needs to be translated to speech.
<VOICE TYPE=“Text” TEXT=“Welcome to your Home Page”>
<VOICE TYPE=“Stream” VALUE=“% URL %” TERMINATE=“% tone %”>
VOICE tag with TYPE set to “Stream” indicates a continuous voice stream identified by its URL. The browser accesses the voice stream and continuously plays it to the user. Its attributes are VALUE, which specifies the URL of the stream, and TERMINATE, which specifies the tone the user can enter to terminate the playback.
<VOICE TYPE=“Money” VALUE=“% number %” FORMAT=“% format %”>
VOICE tag with TYPE set to “Money” indicates a number that needs to be presented as currency. Its attributes are VALUE and FORMAT. VALUE specifies the decimal value of the number and FORMAT, which is optional, specifies the currency type such as “US Dollar”, “British Pound”, etc. The default value for FORMAT is “US Dollar”.
<VOICE TYPE=“Number” VALUE=“% number %” FORMAT=“% format %”>
VOICE tag with TYPE set to “Number” indicates a number that needs to be presented as a decimal number. Its attributes are VALUE and FORMAT. VALUE specifies the decimal value and FORMAT, which is optional, specifies the precision to be conveyed. Digits after the decimal point are pronounced as characters. The default value for FORMAT is 2, which indicates 2-digit precision after the decimal point.
<VOICE TYPE=“Character” VALUE=“% string %”>
VOICE tag with TYPE set to “Character” indicates a sequence of characters that are to be presented separately with no pauses in between. Its attribute is VALUE, which specifies the sequence of characters as a string.
<VOICE TYPE=“Date” VALUE=“% date %” FORMAT=“% format %”>
VOICE tag with TYPE set to “Date” indicates an expression that is to be presented as a date. Its attributes are VALUE and FORMAT. The VALUE attribute specifies the expression and the FORMAT attribute, which is optional, specifies the format of the expression. The default format is MM/DD/YY.
<VOICE TYPE=“Ordinal” VALUE=“% number %”>
VOICE tag with TYPE set to “Ordinal” indicates a number that is to be presented as an ordinal (i.e. as the Nth value). Its attribute is VALUE, which specifies the number. Values are pronounced as “first”, “second”, “third”, etc.
<VOICESTRING NAME=“% name %”>
. . . Voice Components . . .
VOICESTRING tag indicates a sequence of voice components that are grouped together for presentation without any pauses in between. Each of the voice components can be any of the primitives previously defined. The voice browser gathers the individual components and plays them together in sequence.
<VOICE TYPE=“Index” SRC=“welcome.vap” INDEX=“begin” TEXT=“Welcome”>
<VOICE TYPE=“File” SRC=“username.vox” TEXT=“user's name”>
<VOICE TYPE=“Index” SRC=“welcome.vap” INDEX=“end” TEXT=“to VOIS NET”>
The voice browser “plays” each in-line voice component in sequence as it encounters it in the HVML page starting from the beginning of the page. Each voice component is played only once for each presentation. A “reload” command would cause the voice browser to re-play the page.
Of course, voice elements can also be invoked by hyper links pointing to voice files containing digitized voice data. This is similar to existing HTML conventions. The voice browser simply fetches the new page and plays it once. In the next section, we will discuss how hyperlinks can be invoked using touch tone or key word input.
Voice responsive labels for hyper-links
In order to invoke hyper links embedded in a HVML page, two new attributes “TONE” and “LABEL” are added to the anchor element. These attributes are used in conjunction with the existing HREF attribute in an anchor element that makes the anchor into a hyper link. When the user selects the touch tone signals specified by the value of the TONE attribute followed by the “#” tone or utters the word specified by the LABEL attribute, the browser invokes the corresponding hyper link. The TONE and LABEL attribute values must be unique within a page.
<A HREF=“myweb/home/greeting.vml” TONE=“HELLO”>
<A HREF=“myweb/home/greeting.vml” LABEL=“HELLO”>
When the user presses “H, E, L, L, O, #” on the touch tone phone or says the word “HELLO” on the phone, the browser will invoke the corresponding hyper link and access the “greeting.vml” page.
Keyword accessible indexes for anchors
HTML allows indexed access of fragments within a page by unique labels associated with anchors surrounding the fragment. The NAME attribute in an anchor element specifies a label that is unique within the page. This label can then be used as an index by the browser to search for the fragment by matching the unique label with the one supplied in the hyperlink. The hyperlink for the indexed fragment uses the regular URL for the page concatenated with the fragment's unique label with a “#” separator.
Coupled with voice responsive hyper links, fragment labels can be used to construct simple menus or database searches.
Suppose “myweb/home/prompts.vml” contains the following HVML text.
<A NAME=“prompt1”><VOICE TEXT=“Press CAL# for Calendar”></A>
<A NAME=“prompt2”><VOICE TEXT=“Press ADDR# for Address Book”></A>
<A NAME=“prompt3”><VOICE TEXT=“Press EMAIL# for Electronic Mail”></A>
Suppose another HVML page contains the following hyperlinks.
<A HREF=“myweb/home/prompts.vml#prompt1” TONE=“1”>Press 1 to hear Prompt1</A>
<A HREF=“myweb/home/prompts.vml#prompt2” TONE=“2”>Press 2 to hear Prompt2</A>
<A HREF=“myweb/home/prompts.vml#prompt3” TONE=“3”>Press 3 to hear Prompt3</A>
Then, if the user presses “1, #”, the browser will fetch the “myweb/home/prompts.vml” HVML page, match “prompt1” index with the first anchor's “prompt1” label, and start presenting the prompts starting with text-to-speech translation of “Press CAL# for Calendar”.
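The browser-side fragment lookup in this example can be sketched as below. The function name `resolve_fragment` and the list-of-labels representation are assumptions for illustration.

```python
# Sketch of fragment navigation: the browser splits the hyperlink URL at
# "#", fetches the page, and starts playback at the anchor whose NAME
# label matches the fragment.

def resolve_fragment(url, page_anchors):
    """Split a URL of the form page#label and return the page part and
    the playback start index within the page's ordered anchor labels
    (0, i.e. the top of the page, if there is no fragment)."""
    page, sep, label = url.partition("#")
    if not sep:
        return page, 0
    return page, page_anchors.index(label)

anchors = ["prompt1", "prompt2", "prompt3"]  # NAME labels in document order
page, start = resolve_fragment("myweb/home/prompts.vml#prompt2", anchors)
print(page, start)  # myweb/home/prompts.vml 1
```

Coupled with the TONE attribute on the linking page, this gives a complete touch-tone menu: the tone selects the hyperlink, and the fragment label selects where playback begins.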
<PAUSE TIMEOUT=“% seconds %” TERMINATE=“% tone %”>
In order to let the voice page publisher control the behavior of the voice browser, HVML defines a tag “PAUSE” with “TIMEOUT” and “TERMINATE” attributes. When the browser encounters a PAUSE statement, it pauses until either the amount of time specified in the TIMEOUT attribute elapses or the user enters the tone specified in the “TERMINATE” attribute. If the value of the TIMEOUT attribute is 0, then the browser waits indefinitely. The default value for TIMEOUT is 1 second. The default value for TERMINATE is “#”.
Voice Responsive Forms
HVML uses the FORM tag to enable user input similar to HTML, including the METHOD attribute, which specifies the way parameters are passed to the server, and the ACTION attribute, which specifies the procedure to be invoked by the server to process the form. HVML extends the INPUT tag within forms by introducing the VOICEINPUT tag. VOICEINPUT takes a TYPE attribute similar to the INPUT tag with three new values, “voice”, “tone” and “review”, in addition to the existing “reset” and “submit” values. The HVML browser pauses at each VOICEINPUT statement in an HVML form until the specified input is supplied or input is terminated before processing the remaining form.
The VOICEINPUT tag with TYPE value set to “voice” indicates a form that accepts voice input. Usually, a voice prompt or text-to-speech segment precedes the VOICEINPUT tag alerting the user that input is required and how to terminate input. The user is expected to speak and this message is recorded in real-time and supplied to the Voice Web server for processing. The VOICEINPUT tag containing “voice” value for the TYPE attribute also supports a MAXTIME attribute which specifies the maximum recording time for the message and a TERMINATE attribute which specifies the touch tone that terminates input. If the MAXTIME attribute is not specified, then the default value of “15” is assumed. If TERMINATE attribute is not specified, then the default value of “#” is assumed. For example, if the MAXTIME value is 20 and TERMINATE value is “#”, then recording terminates when the user presses “#” or 20 seconds of time elapses.
The VOICEINPUT tag with TYPE value set to “tone” indicates a form that accepts touch tone input. Again, a voice prompt or a text-to-speech segment precedes the VOICEINPUT tag alerting the user for input. The user is expected to press a sequence of touch tones which are recorded and supplied to the Voice Web server for processing. The VOICEINPUT tag containing “tone” value for the TYPE attribute also supports a MAXDIGITS attribute which specifies the maximum number of touch tone digits that can be supplied and a TERMINATE attribute which specifies the touch tone that terminates input. If the MAXDIGITS attribute is not specified, then the default value of “20” is assumed. If TERMINATE attribute is not specified, then the default value of “#” is assumed. For example, if the MAXDIGITS value is 10 and TERMINATE value is “#”, then input process terminates when the user presses “#” or 10 digits are supplied.
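The touch tone collection rule just described (TERMINATE tone or MAXDIGITS limit, whichever comes first) can be sketched as follows; the function name `collect_tones` is hypothetical.

```python
# Sketch of the VOICEINPUT "tone" collection rule, using the defaults
# described above: input ends when the TERMINATE tone is pressed or
# MAXDIGITS digits have been supplied, whichever comes first.

def collect_tones(key_presses, maxdigits=20, terminate="#"):
    """Accumulate touch tones until the terminator or the digit limit."""
    digits = []
    for key in key_presses:
        if key == terminate:
            break
        digits.append(key)
        if len(digits) >= maxdigits:
            break
    return "".join(digits)

print(collect_tones(list("4085551212#99")))        # 4085551212
print(collect_tones(list("123456"), maxdigits=4))  # 1234
```

The "voice" variant behaves analogously with MAXTIME seconds of recording in place of MAXDIGITS digits.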
The VOICEINPUT tag with TYPE value set to “review” indicates that the current values of the form can be reviewed by selecting the “review” input. The VOICEINPUT tag with TYPE value set to “reset” indicates that the current values of the form should be reset to their original defaults. The VOICEINPUT tag with TYPE value set to “submit” indicates that the current form should be submitted to the server. Each of these three TYPE values support a SELECTTONES attribute and a SKIPTONES attribute. SELECTTONES attribute specifies the sequence of touch tones that activates the corresponding selection. SKIPTONES attribute specifies the sequence of touch tones that skips the selection. If the SELECTTONES attribute is not specified, then the default value of “#” is assumed and if the SKIPTONES attribute is not specified, then the default value of “*” is assumed.
For example, if the SELECTTONES attribute value is “REVIEW” and SKIPTONES attribute value is “SKIP” for a VOICEINPUT element with TYPE value set to “review”, the user can enter “REVIEW” to review the form values or enter “SKIP” to skip the selection. VOICEINPUT tag with TYPE value set to “submit” similarly indicates the values of the form can be submitted to the server. If the SELECTTONES attribute value is “DONE” and the SKIPTONES attribute value is “**”, the user can either enter “DONE” to submit the form or press “**” to skip the selection. VOICEINPUT tag with TYPE value set to “reset” similarly indicates that the values of the form be reset to their original values.
All browser commands must start with the “*” key. Each browser command is associated with one or more key words that uniquely identify it. For example, in order to activate the “Home” command, the user would press “*home” on the telephone key pad. The key words are chosen in such a way as to generate unique dial tone sequences. A set of default browser commands are listed below with the keyword and description of the command. Alternatively, the browser commands can also be issued by vocalizing the corresponding commands. For example, to activate the “Home” command, the user would say “home” on the telephone.
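The keyword-to-tone mapping follows the standard telephone keypad letter groupings, so “*home” is keyed as *4663. A sketch of that translation (function name is illustrative):

```python
# Sketch of how '*'-prefixed command key words map to unique dial tone
# sequences on a standard telephone keypad (letters grouped on digits 2-9).

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def keyword_to_tones(keyword):
    """Translate a '*'-prefixed browser command word to its tone sequence."""
    assert keyword.startswith("*")
    return "*" + "".join(LETTER_TO_DIGIT[c] for c in keyword[1:].lower())

print(keyword_to_tones("*home"))  # *4663
print(keyword_to_tones("*help"))  # *4357
```

The tone sequences given in parentheses throughout the command list below are exactly these keypad translations of the key words.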
Jump to the previous page from which the current page was accessed via a hyper link. This command is activated by pressing “*pr” (*77) or “*prev” (*7738) sequence.
Jump to the next page in a sequence of hyper links. This command is activated by pressing “*n” (*6) or “*next” (*6398) sequence.
Present the titles of the pages accessed so far in the order of their hyper link access sequence. Pause after each title. If the user presses “#”, then jump to the page specified by the title. If not, proceed to the next title. This command is activated by pressing “*hi” (*44) or “*hist” (*4478) sequence.
Jump to the first page in the sequence of hyper links. This command is activated by pressing “*ho” (*46) or “*home” (*4663) sequence.
Reload the current page again from the Web server. This command is activated by pressing “*re” (*73) or “*relo” (*7356) sequence.
Jump to the home page of the help page set. Help pages are navigated in exactly the same way as ordinary HVML pages. However, a new browser instance is created on activation which must be “exited” to get back to the page context from which “Help” page set was accessed. This command is activated by pressing “*h” (*4) or “*help” (*4357) sequence.
Jump to the home page of the Fax dialog session using HTML forms. Again, a new browser instance is created on activation which must be “exited” to get back to the page context from which the “Fax” dialog session was activated. This command is activated by pressing “*fa” (*32) or “*fax” (*329) sequence.
Stop loading the page that is currently being accessed. This command is activated by pressing “*t” (*8) or “*stop” (*7867) sequence.
Exit the current instance of the browser and return to the page being accessed in the previous instance of the browser. If this is the first instance of the browser, then exit the browser and hang-up the phone. This command is activated by pressing “*x” (*9) or “*exit” (*3948) sequence.
Present the titles of the pages selected as bookmarks in the order of their hyper link access sequence. Pause after each title. If the user presses “#”, then jump to the page specified by the title. If not, proceed to the next title. This command is activated by pressing “*bo” (*26) or “*book” (*2665) sequence.
When the Voice browser is activated to play back voice prompts or speech segments, an additional set of browser commands are available to the user to control the playback.
Pause the play back at current position. This command is activated by pressing “*p” (*7) or “*pause” (*72873).
Continue play back from current position. This command is activated by pressing “*p” (*7) or “*play” (*7529).
Back up the play back position by 5 seconds and start play back. The command is activated by pressing “*b” (*2) or “*back” (*2225). Repeated pressing of the same tone implies successive back up by 5 seconds for each tone.
Forward the play back position by 5 seconds and start play back. The command is activated by pressing “*f” (*3) or “*frwd” (*3793). Repeated pressing of the same tone implies successive skip forward by 5 seconds for each tone.
Back up the play back position to the beginning of the play back sequence and start play back. The command is activated by pressing “*0”.
Jump to the end of the play back sequence, backup by 5 seconds and start play back. The command is activated by pressing “*1”.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4053710||1 Mar 1976||11 Oct 1977||Ncr Corporation||Automatic speaker verification systems employing moment invariants|
|US4253157||29 Sep 1978||24 Feb 1981||Alpex Computer Corp.||Data access system wherein subscriber terminals gain access to a data bank by telephone lines|
|US4534056||26 Aug 1982||6 Aug 1985||Westinghouse Electric Corp.||Voice-recognition elevator security system|
|US4648061||21 Feb 1986||3 Mar 1987||Machines Corporation, A Corporation Of New York||Electronic document distribution network with dynamic document interchange protocol generation|
|US4653097||23 May 1986||24 Mar 1987||Tokyo Shibaura Denki Kabushiki Kaisha||Individual verification apparatus|
|US4659877 *||16 Nov 1983||21 Apr 1987||Speech Plus, Inc.||Verbal computer terminal system|
|US4763278||13 Apr 1983||9 Aug 1988||Texas Instruments Incorporated||Speaker-independent word recognizer|
|US4785408||11 Mar 1985||15 Nov 1988||AT&T Information Systems Inc. American Telephone and Telegraph Company||Method and apparatus for generating computer-controlled interactive voice services|
|US4788643||27 Aug 1985||29 Nov 1988||Trippe Kenneth A B||Cruise information and booking data processing system|
|US4831551||13 Oct 1987||16 May 1989||Texas Instruments Incorporated||Speaker-dependent connected speech word recognizer|
|US4833713||4 Sep 1986||23 May 1989||Ricoh Company, Ltd.||Voice recognition system|
|US4839853||15 Sep 1988||13 Jun 1989||Bell Communications Research, Inc.||Computer information retrieval using latent semantic structure|
|US4896319||31 Mar 1988||23 Jan 1990||American Telephone And Telegraph Company, At&T Bell Laboratories||Identification and authentication of end user systems for packet communications network services|
|US4922538||5 Feb 1988||1 May 1990||British Telecommunications Public Limited Company||Multi-user speech recognition system|
|US4945476||26 Feb 1988||31 Jul 1990||Elsevier Science Publishing Company, Inc.||Interactive system and method for creating and editing a knowledge base for use as a computerized aid to the cognitive process of diagnosis|
|US4953085||15 Apr 1987||28 Aug 1990||Proprietary Financial Products, Inc.||System for the operation of a financial account|
|US4972349||14 Aug 1989||20 Nov 1990||Kleinberger Paul J||Information retrieval system and method|
|US4989248||3 Mar 1989||29 Jan 1991||Texas Instruments Incorporated||Speaker-dependent connected speech word recognition method|
|US5007081||5 Jan 1989||9 Apr 1991||Origin Technology, Inc.||Speech activated telephone|
|US5020107||4 Dec 1989||28 May 1991||Motorola, Inc.||Limited vocabulary speech recognition system|
|US5054082||26 Mar 1990||1 Oct 1991||Motorola, Inc.||Method and apparatus for programming devices to recognize voice commands|
|US5062074||30 Aug 1990||29 Oct 1991||Tnet, Inc.||Information retrieval system and method|
|US5127043||15 May 1990||30 Jun 1992||Vcs Industries, Inc.||Simultaneous speaker-independent voice recognition and verification over a telephone network|
|US5144672||28 Sep 1990||1 Sep 1992||Ricoh Company, Ltd.||Speech recognition apparatus including speaker-independent dictionary and speaker-dependent|
|US5146439||4 Jan 1989||8 Sep 1992||Pitney Bowes Inc.||Records management system having dictation/transcription capability|
|US5224163||28 Sep 1990||29 Jun 1993||Digital Equipment Corporation||Method for delegating authorization from one entity to another through the use of session encryption keys|
|US5243643||10 Oct 1991||7 Sep 1993||Voiceples Corporation||Voice processing system with configurable caller interfaces|
|US5247497||18 Nov 1991||21 Sep 1993||Octel Communications Corporation||Security systems based on recording unique identifier for subsequent playback|
|US5247575||24 Apr 1992||21 Sep 1993||Sprague Peter J||Information distribution system|
|US5255305||1 Nov 1990||19 Oct 1993||Voiceplex Corporation||Integrated voice processing system|
|US5274695 *||11 Jan 1991||28 Dec 1993||U.S. Sprint Communications Company Limited Partnership||System for verifying the identity of a caller in a telecommunications network|
|US5278942||5 Dec 1991||11 Jan 1994||International Business Machines Corporation||Speech coding apparatus having speaker dependent prototypes generated from nonuser reference data|
|US5293452||1 Jul 1991||8 Mar 1994||Texas Instruments Incorporated||Voice log-in using spoken name input|
|US5297183||13 Apr 1992||22 Mar 1994||Vcs Industries, Inc.||Speech recognition system for electronic switches in a cellular telephone or personal communication network|
|US5297194||22 Jun 1992||22 Mar 1994||Vcs Industries, Inc.||Simultaneous speaker-independent voice recognition and verification over a telephone network|
|US5325421||24 Aug 1992||28 Jun 1994||At&T Bell Laboratories||Voice directed communications system platform|
|US5335276||16 Dec 1992||2 Aug 1994||Texas Instruments Incorporated||Communication system and methods for enhanced information transfer|
|US5335313||3 Dec 1991||2 Aug 1994||Douglas Terry L||Voice-actuated, speaker-dependent control system for hospital bed|
|US5343529||28 Sep 1993||30 Aug 1994||Milton Goldfine||Transaction authentication using a centrally generated transaction identifier|
|US5355433||18 Mar 1991||11 Oct 1994||Ricoh Company, Ltd.||Standard pattern comparing system for eliminating duplicative data entries for different applications program dictionaries, especially suitable for use in voice recognition systems|
|US5359508||21 May 1993||25 Oct 1994||Rossides Michael T||Data collection and retrieval system for registering charges and royalties to users|
|US5365574||25 Nov 1992||15 Nov 1994||Vcs Industries, Inc.||Telephone network voice recognition and verification using selectively-adjustable signal thresholds|
|US5388213||29 Oct 1993||7 Feb 1995||Apple Computer, Inc.||Method and apparatus for determining whether an alias is available to uniquely identify an entity in a communications system|
|US5390278||8 Oct 1991||14 Feb 1995||Bell Canada||Phoneme based speech recognition|
|US5410698||12 Oct 1993||25 Apr 1995||Intel Corporation||Method and system for dynamic loading of software libraries|
|US5430827||23 Apr 1993||4 Jul 1995||At&T Corp.||Password verification system|
|US5448625||13 Apr 1993||5 Sep 1995||Msi Electronics Inc.||Telephone advertising method and apparatus|
|US5452340||1 Apr 1993||19 Sep 1995||Us West Advanced Technologies, Inc.||Method of voice activated telephone dialing|
|US5452341||15 Oct 1993||19 Sep 1995||Voiceplex Corporation||Integrated voice processing system|
|US5452397||11 Dec 1992||19 Sep 1995||Texas Instruments Incorporated||Method and system for preventing entry of confusingly similar phrases in a voice recognition system vocabulary list|
|US5454030||8 Feb 1995||26 Sep 1995||Alcatel N.V.||Network of voice and/or fax mail systems|
|US5463715||30 Dec 1992||31 Oct 1995||Innovation Technologies||Method and apparatus for speech generation from phonetic codes|
|US5465290||16 Dec 1993||7 Nov 1995||Litle & Co.||Confirming identity of telephone caller|
|US5479491||16 Dec 1994||26 Dec 1995||Tele Guia Talking Yellow Pages||Integrated voice-mail based voice and information processing system|
|US5479510||15 Nov 1994||26 Dec 1995||Olsen; Kurt B.||Automated data card payment verification method|
|US5483580||19 Mar 1993||9 Jan 1996||Octel Communications Corporation||Methods and apparatus for non-simultaneous transmittal and storage of voice message and digital text or image|
|US5485370||25 Aug 1993||16 Jan 1996||Transaction Technology, Inc.||Home services delivery system with intelligent terminal emulator|
|US5486686||18 May 1992||23 Jan 1996||Xerox Corporation||Hardcopy lossless data storage and communications for electronic document processing systems|
|US5487671||21 Jan 1993||30 Jan 1996||Dsp Solutions (International)||Computerized system for teaching speech|
|US5490251||9 Aug 1991||6 Feb 1996||First Data Resources Inc.||Method and apparatus for transmitting data over a signalling channel in a digital telecommunications network|
|US5499288||22 Mar 1994||12 Mar 1996||Voice Control Systems, Inc.||Simultaneous voice recognition and verification to allow access to telephone network services|
|US5510777||28 Dec 1993||23 Apr 1996||At&T Corp.||Method for secure access control|
|US5513272||5 Dec 1994||30 Apr 1996||Wizards, Llc||System for verifying use of a credit/identification card including recording of physical attributes of unauthorized users|
|US5517605||11 Aug 1993||14 May 1996||Ast Research Inc.||Method and apparatus for managing browsing, and selecting graphic images|
|US5526520||21 Sep 1993||11 Jun 1996||Krause; Gary M.||Method to organize and manipulate blueprint documents using hypermedia links from a primary document to recall related secondary documents|
|US5530852||20 Dec 1994||25 Jun 1996||Sun Microsystems, Inc.||Method for extracting profiles and topics from a first file written in a first markup language and generating files in different markup languages containing the profiles and topics for use in accessing data described by the profiles and topics|
|US5533115||4 Nov 1994||2 Jul 1996||Bell Communications Research, Inc.||Network-based telephone system providing coordinated voice and data delivery|
|US5534855||15 Dec 1994||9 Jul 1996||Digital Equipment Corporation||Method and system for certificate based alias detection|
|US5537586||6 May 1994||16 Jul 1996||Individual, Inc.||Enhanced apparatus and methods for retrieving and selecting profiled textural information records from a database of defined category structures|
|US5542046||2 Jun 1995||30 Jul 1996||International Business Machines Corporation||Server entity that provides secure access to its resources through token validation|
|US5544255||31 Aug 1994||6 Aug 1996||Peripheral Vision Limited||Method and system for the capture, storage, transport and authentication of handwritten signatures|
|US5544322||9 May 1994||6 Aug 1996||International Business Machines Corporation||System and method for policy-based inter-realm authentication within a distributed processing system|
|US5548726||17 Dec 1993||20 Aug 1996||Taligent, Inc.||System for activating new service in client server network by reconfiguring the multilayer network protocol stack dynamically within the server node|
|US5550976||8 Dec 1992||27 Aug 1996||Sun Hydraulics Corporation||Decentralized distributed asynchronous object oriented system and method for electronic data management, storage, and communication|
|US5551021||25 Jul 1994||27 Aug 1996||Olympus Optical Co., Ltd.||Image storing managing apparatus and method for retrieving and displaying merchandise and customer specific sales information|
|US5608786 *||13 Feb 1995||4 Mar 1997||Alphanet Telecom Inc.||Unified messaging system and method|
|US5613012||17 May 1995||18 Mar 1997||Smarttouch, Llc.||Tokenless identification system for authorization of electronic transactions and electronic transmissions|
|US5799063 *||15 Aug 1996||25 Aug 1998||Talk Web Inc.||Communication system and method of providing access to pre-recorded audio messages via the Internet|
|US5884262 *||28 Mar 1996||16 Mar 1999||Bell Atlantic Network Services, Inc.||Computer network audio access and conversion system|
|US5915001 *||14 Nov 1996||22 Jun 1999||Vois Corporation||System and method for providing and using universally accessible voice and speech data files|
|US5923736 *||2 Apr 1996||13 Jul 1999||National Semiconductor Corporation||Hypertext markup language based telephone apparatus|
|US6233318 *||5 Nov 1996||15 May 2001||Comverse Network Systems, Inc.||System for accessing multimedia mailboxes and messages over the internet and via telephone|
|US6240448 *||20 Dec 1996||29 May 2001||Rutgers, The State University Of New Jersey||Method and system for audio access to information in a wide area computer network|
|1||Dave Krupinski; "Computer Telephony and the Internet"; 1996 Stylus Product Group; published on the World Wide Web at the URL "http://www.stylus.com"; publication date unknown but prior to Nov. 14, 1996; pages not numbered.|
|2||Hemphill, et al., "Surfing the Web by Voice", Multimedia '95, Oct. 1995, pp. 215-221.|
|3||Nahm, E.R., "Speech Recognition Makes Using the Internet Easier Than Ever-Press Release", Sep. 12, 1996, pp. 1-2.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6529586 *||31 Aug 2000||4 Mar 2003||Oracle Cable, Inc.||System and method for gathering, personalized rendering, and secure telephonic transmission of audio data|
|US6539359 *||23 Aug 2000||25 Mar 2003||Motorola, Inc.||Markup language for interactive services and methods thereof|
|US6560576 *||25 Apr 2000||6 May 2003||Nuance Communications||Method and apparatus for providing active help to a user of a voice-enabled application|
|US6618806||6 Jul 1999||9 Sep 2003||Saflink Corporation||System and method for authenticating users in a computer network|
|US6636590 *||30 Oct 2000||21 Oct 2003||Ingenio, Inc.||Apparatus and method for specifying and obtaining services through voice commands|
|US6640228 *||10 Nov 2000||28 Oct 2003||Verizon Laboratories Inc.||Method for detecting incorrectly categorized data|
|US6658414 *||6 Mar 2001||2 Dec 2003||Topic Radio, Inc.||Methods, systems, and computer program products for generating and providing access to end-user-definable voice portals|
|US6718015 *||16 Dec 1998||6 Apr 2004||International Business Machines Corporation||Remote web page reader|
|US6728679 *||30 Oct 2000||27 Apr 2004||Koninklijke Philips Electronics N.V.||Self-updating user interface/entertainment device that simulates personal interaction|
|US6732142 *||25 Jan 2000||4 May 2004||International Business Machines Corporation||Method and apparatus for audible presentation of web page content|
|US6745123 *||30 Jun 2000||1 Jun 2004||Robert Bosch Gmbh||Method and device for transmitting navigation information from data processing center to an on-board navigation system|
|US6789060||31 Oct 2000||7 Sep 2004||Gene J. Wolfe||Network based speech transcription that maintains dynamic templates|
|US6799163 *||13 Sep 2002||28 Sep 2004||Vas International, Inc.||Biometric identification system|
|US6859776 *||4 Oct 1999||22 Feb 2005||Nuance Communications||Method and apparatus for optimizing a spoken dialog between a person and a machine|
|US6928405 *||5 Sep 2001||9 Aug 2005||Inventec Corporation||Method of adding audio data to an information title of a document|
|US6928547||7 Jul 2003||9 Aug 2005||Saflink Corporation||System and method for authenticating users in a computer network|
|US6934684 *||17 Jan 2003||23 Aug 2005||Dialsurf, Inc.||Voice-interactive marketplace providing promotion and promotion tracking, loyalty reward and redemption, and other features|
|US7016845 *||30 May 2003||21 Mar 2006||Oracle International Corporation||Method and apparatus for providing speech recognition resolution on an application server|
|US7027984 *||15 Mar 2001||11 Apr 2006||Hewlett-Packard Development Company, L.P.||Tone-based mark-up dictation method and system|
|US7085960||29 Oct 2002||1 Aug 2006||Hewlett-Packard Development Company, L.P.||Communication system and method|
|US7113572||3 Oct 2001||26 Sep 2006||Cingular Wireless Ii, Llc||System and method for recognition of and automatic connection using spoken address information received in voice mails and live telephone conversations|
|US7142648||23 Jul 2003||28 Nov 2006||Sprint Communications Company L.P.||System for securing messages recorded in an IP telephony network|
|US7174297 *||9 Mar 2001||6 Feb 2007||Bevocal, Inc.||System, method and computer program product for a dynamically configurable voice portal|
|US7194071 *||28 Dec 2000||20 Mar 2007||Intel Corporation||Enhanced media gateway control protocol|
|US7216287 *||2 Aug 2002||8 May 2007||International Business Machines Corporation||Personal voice portal service|
|US7240006 *||27 Sep 2000||3 Jul 2007||International Business Machines Corporation||Explicitly registering markup based on verbal commands and exploiting audio context|
|US7251602 *||27 Mar 2001||31 Jul 2007||Canon Kabushiki Kaisha||Voice browser system|
|US7263489||11 Jan 2002||28 Aug 2007||Nuance Communications, Inc.||Detection of characteristics of human-machine interactions for dialog customization and analysis|
|US7272415 *||21 Mar 2006||18 Sep 2007||Nec Infrontia Corporation||Telephone system enabling operation of a telephone set by way of a portable terminal|
|US7274672||29 Oct 2002||25 Sep 2007||Hewlett-Packard Development Company, L.P.||Data processing system and method|
|US7302391 *||25 May 2005||27 Nov 2007||Telesector Resources Group, Inc.||Methods and apparatus for performing speech recognition over a network and using speech recognition results|
|US7366766 *||23 Mar 2001||29 Apr 2008||Eliza Corporation||Web-based speech recognition with scripting and semantic objects|
|US7370086 *||14 Mar 2002||6 May 2008||Eliza Corporation||Web-based speech recognition with scripting and semantic objects|
|US7379872 *||17 Jan 2003||27 May 2008||International Business Machines Corporation||Method, apparatus, and program for certifying a voice profile when transmitting text messages for synthesized speech|
|US7379973 *||12 Jan 2001||27 May 2008||Voicegenie Technologies, Inc.||Computer-implemented voice application indexing web site|
|US7406657 *||22 Sep 2000||29 Jul 2008||International Business Machines Corporation||Audible presentation and verbal interaction of HTML-like form constructs|
|US7406658 *||13 May 2002||29 Jul 2008||International Business Machines Corporation||Deriving menu-based voice markup from visual markup|
|US7469210 *||23 Oct 2003||23 Dec 2008||Voice Signature Llc||Outbound voice signature calls|
|US7486664 *||17 Nov 2004||3 Feb 2009||Web Telephony, Llc||Internet controlled telephone system|
|US7512117 *||9 Aug 2004||31 Mar 2009||Web Telephony, Llc||Internet controlled telephone system|
|US7590538||31 Aug 1999||15 Sep 2009||Accenture Llp||Voice recognition system for navigating on the internet|
|US7593721 *||24 May 2006||22 Sep 2009||Nitesh Ratnakar||Method and apparatus for delivering geographical specific advertisements to a communication device|
|US7610016||4 Feb 2005||27 Oct 2009||At&T Mobility Ii Llc||System and method for providing an adapter module|
|US7627638||20 Dec 2004||1 Dec 2009||Google Inc.||Verbal labels for electronic messages|
|US7657013||29 Oct 2007||2 Feb 2010||Utbk, Inc.||Apparatus and method for ensuring a real-time connection between users and selected service provider using voice mail|
|US7698183||18 Jun 2003||13 Apr 2010||Utbk, Inc.||Method and apparatus for prioritizing a listing of information providers|
|US7720091||10 Jan 2006||18 May 2010||Utbk, Inc.||Systems and methods to arrange call back|
|US7729938||2 Jul 2007||1 Jun 2010||Utbk, Inc.||Method and system to connect consumers to information|
|US7752048||27 May 2005||6 Jul 2010||Oracle International Corporation||Method and apparatus for providing speech recognition resolution on a database|
|US7769591||31 Aug 2006||3 Aug 2010||White George M||Distributed voice user interface|
|US7792677 *||8 Sep 2005||7 Sep 2010||Fuji Xerox Co., Ltd.||Voice guide system and voice guide method thereof|
|US7831728||1 Nov 2006||9 Nov 2010||Citrix Systems, Inc.||Methods and systems for real-time seeking during real-time playback of a presentation layer protocol data stream|
|US7864929 *||10 Feb 2005||4 Jan 2011||Nuance Communications, Inc.||Method and systems for accessing data from a network via telephone, using printed publication|
|US7886009||20 Aug 2004||8 Feb 2011||Utbk, Inc.||Gate keeper|
|US7937439||27 Dec 2001||3 May 2011||Utbk, Inc.||Apparatus and method for scheduling live advice communication with a selected service provider|
|US7962842||9 Feb 2006||14 Jun 2011||International Business Machines Corporation||Method and systems for accessing data by spelling discrimination letters of link names|
|US7987092||8 Apr 2008||26 Jul 2011||Nuance Communications, Inc.||Method, apparatus, and program for certifying a voice profile when transmitting text messages for synthesized speech|
|US8015014 *||16 Jun 2006||6 Sep 2011||Storz Endoskop Produktions Gmbh||Speech recognition system with user profiles management component|
|US8024422||3 Apr 2008||20 Sep 2011||Eliza Corporation||Web-based speech recognition with scripting and semantic objects|
|US8036897||31 Aug 2006||11 Oct 2011||Smolenski Andrew G||Voice integration platform|
|US8041568 *||13 Oct 2006||18 Oct 2011||Google Inc.||Business listing search|
|US8073700||5 Jun 2006||6 Dec 2011||Nuance Communications, Inc.||Retrieval and presentation of network service results for mobile device using a multimodal browser|
|US8078469||22 Jan 2002||13 Dec 2011||White George M||Distributed voice user interface|
|US8131555 *||21 Mar 2000||6 Mar 2012||Aol Inc.||System and method for funneling user responses in an internet voice portal system to determine a desired item or service|
|US8145777||14 Jan 2005||27 Mar 2012||Citrix Systems, Inc.||Method and system for real-time seeking during playback of remote presentation protocols|
|US8166297||2 Jul 2008||24 Apr 2012||Veritrix, Inc.||Systems and methods for controlling access to encrypted data stored on a mobile device|
|US8171288||8 Aug 2005||1 May 2012||Imprivata, Inc.||System and method for authenticating users in a computer network|
|US8185646||29 Oct 2009||22 May 2012||Veritrix, Inc.||User authentication for social networks|
|US8204956 *||27 May 2008||19 Jun 2012||Genesys Telecommunications Laboratories, Inc.||Computer-implemented voice application indexing web site|
|US8233592 *||10 Nov 2003||31 Jul 2012||Nuance Communications, Inc.||Personal home voice portal|
|US8259911 *||1 Aug 2008||4 Sep 2012||Callwave Communications, Llc||Call processing and subscriber registration systems and methods|
|US8296147 *||7 Aug 2006||23 Oct 2012||Verizon Patent And Licensing Inc.||Interactive voice controlled project management system|
|US8335687||19 Mar 2012||18 Dec 2012||Google Inc.||Performing speech recognition over a network and using speech recognition results|
|US8340130||14 Jan 2005||25 Dec 2012||Citrix Systems, Inc.||Methods and systems for generating playback instructions for rendering of a recorded computer session|
|US8369311||1 Aug 2006||5 Feb 2013||Callwave Communications, Llc||Methods and systems for providing telephony services to fixed and mobile telephonic devices|
|US8370152||17 Jun 2011||5 Feb 2013||Nuance Communications, Inc.||Method, apparatus, and program for certifying a voice profile when transmitting text messages for synthesized speech|
|US8380516||27 Oct 2011||19 Feb 2013||Nuance Communications, Inc.||Retrieval and presentation of network service results for mobile device using a multimodal browser|
|US8396710||23 Nov 2011||12 Mar 2013||Ben Franklin Patent Holding Llc||Distributed voice user interface|
|US8401164||2 Jul 2008||19 Mar 2013||Callwave Communications, Llc||Methods and apparatus for providing expanded telecommunications service|
|US8401846||22 Feb 2012||19 Mar 2013||Google Inc.||Performing speech recognition over a network and using speech recognition results|
|US8447599||30 Dec 2011||21 May 2013||Google Inc.||Methods and apparatus for generating, updating and distributing speech recognition models|
|US8457970 *||13 Apr 2001||4 Jun 2013||Swisscom Ag||Voice portal hosting system and method|
|US8472592||9 Aug 2011||25 Jun 2013||Callwave Communications, Llc||Methods and systems for call processing|
|US8494848||13 Sep 2012||23 Jul 2013||Google Inc.||Methods and apparatus for generating, updating and distributing speech recognition models|
|US8510412||6 Sep 2011||13 Aug 2013||Eliza Corporation||Web-based speech recognition with scripting and semantic objects|
|US8520810||13 Sep 2012||27 Aug 2013||Google Inc.||Performing speech recognition over a network and using speech recognition results|
|US8527861||13 Apr 2007||3 Sep 2013||Apple Inc.||Methods and apparatuses for display and traversing of links in page character array|
|US8536976||11 Jun 2008||17 Sep 2013||Veritrix, Inc.||Single-channel multi-factor authentication|
|US8555066||6 Mar 2012||8 Oct 2013||Veritrix, Inc.||Systems and methods for controlling access to encrypted data stored on a mobile device|
|US8582728 *||4 May 2009||12 Nov 2013||Freddie B. Ross||Web-type audio information system using phone communication lines (audio net pages)|
|US8666032 *||30 Apr 2012||4 Mar 2014||Intellisist, Inc.||System and method for processing call records|
|US8682663||21 Jun 2013||25 Mar 2014||Google Inc.||Performing speech recognition over a network and using speech recognition results based on determining that a network connection exists|
|US8718243||29 Aug 2012||6 May 2014||Callwave Communications, Llc||Call processing and subscriber registration systems and methods|
|US8725791||3 May 2010||13 May 2014||Citrix Systems, Inc.||Methods and systems for providing a consistent profile to overlapping user sessions|
|US8731937||26 Oct 2012||20 May 2014||Google Inc.||Updating speech recognition models for contacts|
|US8750469||20 Jun 2013||10 Jun 2014||Callwave Communications, Llc||Methods and systems for call processing|
|US8751957 *||22 Nov 2000||10 Jun 2014||Pace Micro Technology Plc||Method and apparatus for obtaining auditory and gestural feedback in a recommendation system|
|US8781840||31 Jan 2013||15 Jul 2014||Nuance Communications, Inc.||Retrieval and presentation of network service results for mobile device using a multimodal browser|
|US8818809||20 Jun 2013||26 Aug 2014||Google Inc.||Methods and apparatus for generating, updating and distributing speech recognition models|
|US8831185||29 Jun 2012||9 Sep 2014||Nuance Communications, Inc.||Personal home voice portal|
|US8831930||27 Oct 2010||9 Sep 2014||Google Inc.||Business listing search|
|US8831951||13 Nov 2009||9 Sep 2014||Google Inc.||Verbal labels for electronic messages|
|US8837698||10 Apr 2007||16 Sep 2014||Yp Interactive Llc||Systems and methods to collect information just in time for connecting people for real time communications|
|US8838476||22 Oct 2007||16 Sep 2014||Yp Interactive Llc||Systems and methods to provide information and connect people for real time communications|
|US8843376||13 Mar 2007||23 Sep 2014||Nuance Communications, Inc.||Speech-enabled web content searching using a multimodal browser|
|US8874446||5 Mar 2012||28 Oct 2014||Mercury Kingdom Assets Limited||System and method for funneling user responses in an internet voice portal system to determine a desired item or service|
|US8934614||27 May 2008||13 Jan 2015||YP Interactive LLC||Systems and methods for dynamic pay for performance advertisements|
|US9002712||1 Aug 2005||7 Apr 2015||Dialsurf, Inc.||Voice-interactive marketplace providing promotion and promotion tracking, loyalty reward and redemption, and other features|
|US9060063||5 Mar 2013||16 Jun 2015||Yellowpages.Com Llc||Method and system to connect consumers to information|
|US9087507 *||15 Nov 2006||21 Jul 2015||Yahoo! Inc.||Aural skimming and scrolling|
|US9106473||28 Mar 2007||11 Aug 2015||Yellowpages.Com Llc||Systems and methods to connect buyers and sellers|
|US20010043684 *||2 Mar 2001||22 Nov 2001||Mobilee, Inc.||Telephone and wireless access to computer network-based audio|
|US20010049604 *||27 Mar 2001||6 Dec 2001||Fumiaki Ito||Voice Browser system|
|US20010055370 *||13 Apr 2001||27 Dec 2001||Kommer Robert Van||Voice portal hosting system and method|
|US20020002463 *||23 Mar 2001||3 Jan 2002||John Kroeker||Web-based speech recognition with scripting and semantic objects|
|US20020059398 *||14 Nov 2001||16 May 2002||Moriaki Shimabukuro||Voice banner advertisement system and voice banner advertisement method|
|US20020072918 *||22 Jan 2002||13 Jun 2002||White George M.||Distributed voice user interface|
|US20020133402 *||13 Mar 2001||19 Sep 2002||Scott Faber||Apparatus and method for recruiting, communicating with, and paying participants of interactive advertising|
|US20020133571 *||6 Dec 2001||19 Sep 2002||Karl Jacob||Apparatus and method for specifying and obtaining services through an audio transmission medium|
|US20020138262 *||14 Mar 2002||26 Sep 2002||John Kroeker||Web-based speech recognition with scripting and semantic objects|
|US20020143553 *||24 Jan 2001||3 Oct 2002||Michael Migdol||System, method and computer program product for a voice-enabled universal flight information finder|
|US20040128136 *||22 Sep 2003||1 Jul 2004||Irani Pourang Polad||Internet voice browser|
|US20040143438 *||17 Jan 2003||22 Jul 2004||International Business Machines Corporation||Method, apparatus, and program for transmitting text messages for synthesized speech|
|US20040204938 *||4 May 2004||14 Oct 2004||Wolfe Gene J.||System and method for network based transcription|
|US20040205579 *||13 May 2002||14 Oct 2004||International Business Machines Corporation||Deriving menu-based voice markup from visual markup|
|US20050025133 *||9 Aug 2004||3 Feb 2005||Robert Swartz||Internet controlled telephone system|
|US20050038686 *||27 Sep 2004||17 Feb 2005||Lauffer Randall B.||Method and system to connect consumers to information|
|US20050044238 *||1 Oct 2004||24 Feb 2005||Karl Jacob||Method and system to connect consumers to information|
|US20050049854 *||8 Oct 2004||3 Mar 2005||Craig Reding||Methods and apparatus for generating, updating and distributing speech recognition models|
|US20050074102 *||6 Oct 2003||7 Apr 2005||Ebbe Altberg||Method and apparatus to provide pay-per-call performance based advertising|
|US20050074104 *||17 Nov 2004||7 Apr 2005||Web Telephony Llc||Internet controlled telephone system|
|US20050091057 *||14 Dec 2001||28 Apr 2005||General Magic, Inc.||Voice application development methodology|
|US20050100142 *||10 Nov 2003||12 May 2005||International Business Machines Corporation||Personal home voice portal|
|US20050119957 *||18 Jun 2003||2 Jun 2005||Scott Faber||Method and apparatus for prioritizing a listing of information providers|
|US20050129196 *||15 Dec 2003||16 Jun 2005||International Business Machines Corporation||Voice document with embedded tags|
|US20050160083 *||29 Jun 2004||21 Jul 2005||Yahoo! Inc.||User-specific vertical search|
|US20050180401 *||10 Feb 2005||18 Aug 2005||International Business Machines Corporation||Method and systems for accessing data from a network via telephone, using printed publication|
|US20050197168 *||14 Feb 2005||8 Sep 2005||Holmes David W.J.||System and method for providing an adapter module|
|US20050202853 *||4 Feb 2005||15 Sep 2005||Schmitt Edward D.||System and method for providing an adapter module|
|US20050216273 *||25 May 2005||29 Sep 2005||Telesector Resources Group, Inc.||Methods and apparatus for performing speech recognition over a network and using speech recognition results|
|US20050216341 *||10 Mar 2005||29 Sep 2005||Anuj Agarwal||Methods and apparatuses for pay-per-call advertising in mobile/wireless applications|
|US20050216345 *||28 Mar 2005||29 Sep 2005||Ebbe Altberg||Methods and apparatuses for offline selection of pay-per-call advertisers|
|US20050234730 *||15 Jun 2005||20 Oct 2005||Wolfe Gene J||System and method for network based transcription|
|US20050261907 *||27 Jul 2005||24 Nov 2005||Ben Franklin Patent Holding Llc||Voice integration platform|
|US20050273866 *||8 Aug 2005||8 Dec 2005||Saflink Corporation||System and method for authenticating users in a computer network|
|US20060020508 *||23 Jul 2004||26 Jan 2006||Gorti Sreenivasa R||Proxy-based profile management to deliver personalized services|
|US20080086303 *||15 Nov 2006||10 Apr 2008||Yahoo! Inc.||Aural skimming and scrolling|
|US20090279678 *|| ||12 Nov 2009||Ross Freddie B||Web-type audio information system using phone communication lines (audio Net pages)|
|US20120219126 *|| ||30 Aug 2012||Gilad Odinak||System And Method For Processing Call Records|
|WO2005069903A2 *||14 Jan 2005||4 Aug 2005||Evan Robinson||User-specific vertical search|
|U.S. Classification||379/88.02, 379/88.17|
|Cooperative Classification||H04L67/02, H04M2201/405, H04M3/4938|
|European Classification||H04M3/493W, H04L29/08N1|
|20 Sep 2002||AS||Assignment|
|5 Dec 2005||FPAY||Fee payment|
Year of fee payment: 4
|7 Apr 2006||AS||Assignment|
Owner name: USB AG, STAMFORD BRANCH, CONNECTICUT
Free format text: SECURITY AGREEMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:017435/0199
Effective date: 20060331
|24 Aug 2006||AS||Assignment|
Owner name: USB AG, STAMFORD BRANCH, CONNECTICUT
Free format text: SECURITY AGREEMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:018160/0909
Effective date: 20060331
|20 Nov 2009||FPAY||Fee payment|
Year of fee payment: 8
|6 Nov 2013||FPAY||Fee payment|
Year of fee payment: 12