Publication number: US 8201087 B2
Publication type: Grant
Application number: US 12/023,903
Publication date: 12 Jun 2012
Filing date: 31 Jan 2008
Priority date: 1 Feb 2007
Also published as: US20080189605, US20120254744, WO2008095153A2, WO2008095153A3
Inventors: David Kay, Erland Unruh, Gaurav Tandon
Original Assignee: Tegic Communications, Inc.
Spell-check for a keyboard system with automatic correction
US 8201087 B2
Abstract
An adaptation of standard edit distance spell-check algorithms leverages probability-based regional auto-correction algorithms and data structures for ambiguous keypads and other predictive text input systems to provide enhanced typing correction and spell-check features. Strategies for optimization and for ordering results of different types are also provided.
Claims(32)
1. A text input apparatus, comprising:
a user input device comprising an auto-correcting keyboard region comprising a plurality of the characters of an alphabet, wherein each of the plurality of characters corresponds to a location with known coordinates in the auto-correcting keyboard region, wherein each time a user contacts the user input device within the auto-correcting keyboard region, a location associated with the user contact is determined and the determined contact location is added to a current input sequence of contact locations;
a memory containing a plurality of objects, wherein each object is further associated with a frequency of use, and wherein each of the plurality of objects in memory is further associated with one or a plurality of predefined groupings of objects;
an output device with a text display area; and
a processor that receives as an input sequence a series of one or more of said input values from said input device;
said processor programmed to execute a set-edit-distance algorithm to calculate a matching metric between said input sequence and a candidate string, to access a database to retrieve one or more candidate strings, to calculate said set-edit-distance between said input sequence and a candidate string by comparing sets of possible characters corresponding to said input sequence with characters in the candidate string, and to output one or more candidate strings ranked by matching metric;
wherein if a candidate string character is in a set of said sets of possible characters per input, set-edit-distance does not increase; and
wherein if a candidate string character is not in said set of possible characters per input, set-edit-distance is increased according to a predetermined rule,
said processor coupled to the user input device, memory, and output device.
2. A text input apparatus, comprising:
a user input device in which the user's input is ambiguous, said input device comprising an auto-correcting keyboard region comprising a plurality of the characters of an alphabet, wherein each of the plurality of characters corresponds to a location with known coordinates in the auto-correcting keyboard region, wherein each time a user contacts the user input device within the auto-correcting keyboard region, a location associated with the user contact is determined and the determined contact location is added to a current input sequence of contact locations;
a memory containing a vocabulary database comprising a dictionary containing entries;
an output device with a text display area; and
a processor that receives as an input sequence a series of one or more of said input values from said input device;
said processor programmed to execute a set-edit-distance algorithm to calculate a matching metric between said input sequence and a candidate string, to access a database to retrieve one or more candidate strings, to calculate said set-edit-distance between said input sequence and a candidate string by comparing said sets of possible characters corresponding to said input sequence with characters in the candidate string, and to output one or more candidate strings ranked by matching metric;
wherein if a candidate string character is in a set of said sets of possible characters per input, set-edit-distance does not increase; and
wherein if a candidate string character is not in said set of possible characters per input, set-edit-distance is increased according to a predetermined rule,
said processor coupled to the user input device, memory, and output device.
3. The apparatus of claim 2, wherein said processor performs incremental filtering and regional/probability calculations on said list of candidate matches.
4. The apparatus of claim 3, wherein if there is insufficient information from the dictionary and/or user inputs, then a current input is rejected for lack of information to make a determination and the user continues with an input sequence.
5. The apparatus of claim 4, wherein if there is sufficient information to proceed, then the results for the input sequence and dictionary inputs are compared with other top matches in a word choice list and a word is discarded if it is ranked too low on the list.
6. The apparatus of claim 5, wherein if there is sufficient information to proceed, then a lowest-ranked word in said list is dropped if the list is full, and a word is inserted into the list based on ranking.
7. The apparatus of claim 2, wherein said set-edit-distance algorithm assigns one discrete value to a word no matter what single letter in the word is wrong or how wrong it is.
8. The apparatus of claim 2, wherein said set-edit-distance algorithm uses word frequency as a factor in determining a word match.
9. The apparatus of claim 2, wherein said set-edit-distance algorithm uses regional-correction probabilities to determine a most likely word given the rest of the letter matches and/or word frequency.
10. The apparatus of claim 2, said processor further comprising:
a component for tuning the ordering of words in the selection list to mirror the intent or entry style of the user.
11. The apparatus of claim 10, said tuning component either providing results that emphasize regional aspects of user input or that emphasize word completions based on an input sequence so far.
12. The apparatus of claim 2, said set-edit-distance algorithm identifying a shortest edit distance comprising an interpretation that minimizes differences between user inputs and a target word.
13. The apparatus of claim 2, said set-edit-distance algorithm scoring a pair of transposed letters the same as a single-letter replacement error, rather than as two independent errors.
14. The apparatus of claim 2, said processor further implementing a regional correction algorithm that is executed in connection with said set-edit-distance algorithm to compare an input sequence against each vocabulary database entry, wherein user inputs are ambiguous and comprise a set of one or more (letter+probability) pairs, wherein said probability reflects a likelihood that an identified letter is what the user intended, said probability determined by said processor based upon one or more of the following:
Cartesian distance from a stylus tap location to a center of each adjacent letter on a keyboard displayed on a touch screen, the frequency of the adjacent letter, and/or the distribution of taps around each letter;
radial distance between a joystick tilt direction to assigned pie slices of nearby letters of the alphabet;
a degree of similarity between a handwritten letter and a set of possible letter shapes/templates; and
probability that a letter/grapheme is represented in a phoneme or full-word utterance processed by a speech recognition front-end.
15. The apparatus of claim 2, wherein edit distance is applied to ambiguous sets, and wherein penalties are assigned to each difference between an entered and a target vocabulary word.
16. The apparatus of claim 2, said processor executing said set-edit-distance algorithm as follows:
if there are two possible transformations that result in a match, choose the one with the lowest edit distance;
if a letter is in an input's probability set, also calculate a regional-correction probability score for that letter;
accumulate all regional-correction probability scores for all letters in a word to calculate a spell correction tap frequency; and
for 0 edit-distance words having the same word length and each letter in the vocabulary word is present in the inputs' probability sets, use only single tap probabilities.
17. The apparatus of claim 2, said processor executing said set-edit-distance algorithm as follows:
calculating or accumulating values for matching and word list ordering using any of the following:
edit distance;
tap frequency;
stem edit distance;
word frequency; and
source.
18. The apparatus of claim 17, wherein tap frequency is calculated as:

probability of letter 1 * probability of letter 2 * . . . * probability of letter n.
19. The apparatus of claim 2, further comprising said processor executing any of the following optimizations:
discarding a possible word match by allowing only one edit/correction for every three actual inputs, up to a maximum of three edits against any one compared word;
minimizing edit distance calculations, where first pass calculating cells which may allow a comparison are rejected entirely;
starting from the results of a previous pass or temporarily whittling down a previous word list, until the user pauses entry;
providing levels of filtering before, during, or after edit distance matrix calculations are completed, comprising any of:
first letter exact, otherwise withdraw target word from consideration;
first letter near-miss, regional, in probability set;
a first letter of a vocabulary word must match one of the first two inputs to allow one add, one drop, or one transposed pair;
a first letter of a vocabulary word must be in a probability set of one of the first two inputs; and
no filtering.
20. The apparatus of claim 2, wherein word frequency is approximated based on Zipf's Law.
21. The apparatus of claim 2, further comprising said processor programmed for executing configuration parameters that include any of:
number of word completions per near-miss section;
number of spell corrections; and
spell correction either regional/probability set or classic edit distance binary.
22. The apparatus of claim 2, further comprising said processor programmed for determining word list sort order based on factors comprising any of regional probability, edit distance, word recency/frequency, as stored in each database, word length, stem edit distance, and/or which of two or more different list profiles or strategies is being used.
23. The apparatus of claim 2, further comprising said processor programmed for executing a word ordering based upon the following determinations in order:
full word always comes before word completion;
source dictionary;
edit distance;
stem edit distance; and
frequency.
24. The apparatus of claim 2, further comprising said processor programmed for executing a word ordering based upon the following determinations in order:
stem edit distance;
word completion or not;
source;
edit distance; and
frequency.
25. The apparatus of claim 2, further comprising said processor programmed for executing one or more auto-substitution macros, comprising any of:
if an input sequence approximately matches both a shortcut and a stem of expanded text, ranking of the macro may be increased; and
if a word in a mobile message is text slang or misspelled, find a valid sponsored keyword.
26. The apparatus of claim 2, further comprising said processor programmed for using a spell-corrected word choice as a basis for further inputs and word completions.
27. The apparatus of claim 2, further comprising said processor programmed for ambiguous entry for search and discovery, wherein if the user's input sequence is not closely matched by content of a mobile device, one or more spell-corrected interpretations which do result in matches are offered.
28. A text input method, comprising the steps of:
using for text input a user input device in which the user's input is ambiguous, said input device comprising an auto-correcting keyboard region comprising a plurality of the characters of an alphabet, wherein each of the plurality of characters corresponds to a location with known coordinates in the auto-correcting keyboard region, wherein each time a user contacts the user input device within the auto-correcting keyboard region, a location associated with the user contact is determined and the determined contact location is added to a current input sequence of contact locations;
providing a memory containing a vocabulary database comprising a dictionary containing entries;
providing an output device with a text display area; and
providing a processor that receives as an input sequence a series of one or more of said input values from said input device;
said processor programmed to execute a set-edit-distance algorithm to calculate a matching metric between said input sequence and a candidate string, to access a database to retrieve one or more candidate strings, to calculate said set-edit-distance between said input sequence and a candidate string by comparing sets of possible characters corresponding to said input sequence with characters in the candidate string, and to output one or more candidate strings ranked by matching metric;
wherein if a candidate string character is in a set of said sets of possible characters per input, set-edit-distance does not increase; and
wherein if a candidate string character is not in said set of possible characters per input, set-edit-distance is increased according to a predetermined rule,
coupling said processor to the user input device, memory, and output device.
29. A predictive text input apparatus, comprising:
an input device that produces a set of characters comprising an output string in response to user operation thereof, each said user operation corresponding to an intended meaning for said output string and having an actual, ambiguous meaning;
a processor that receives as an input word said output string from said input device and that is programmed to execute a set-edit-distance algorithm that turns said input word into an output word that best matches said intended meaning;
said processor programmed to access a database to retrieve one or more target words and to use a matrix to determine said set-edit-distance between said input word and a target word by comparing said set of characters in the input word with each character in the target word;
wherein if a target character is in said input set of characters, set-edit-distance does not increase; and
wherein if a target character is not in said input set of characters, set-edit-distance is increased according to a predetermined rule.
30. The apparatus of claim 29, said processor programmed to use character probabilities in determining a final set-edit-distance score.
31. A text input apparatus, comprising:
an input device that produces an input in response to user operation thereof, each said user operation corresponding to an intended meaning for said input and wherein at least one said input corresponds to a set comprising a plurality of possible characters;
a processor that receives as an input sequence a series of one or more of said input values from said input device;
said processor programmed to execute a set-edit-distance algorithm to calculate a matching metric between said input sequence and a candidate string, to access a database to retrieve one or more candidate strings, to calculate said set-edit-distance between said input sequence and a candidate string by comparing sets of possible characters corresponding to said input sequence with characters in the candidate string, and to output one or more candidate strings ranked by matching metric;
wherein if a candidate string character is in a set of said sets of possible characters per input, set-edit-distance does not increase; and
wherein if a candidate string character is not in said set of possible characters per input, set-edit-distance is increased according to a predetermined rule.
32. The apparatus of claim 31, said processor programmed to use the probabilities of said possible characters in determining a final set-edit-distance score.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. provisional patent application Ser. No. 60/887,748, filed 1 Feb. 2007, the entirety of which is incorporated herein by this reference thereto.

BACKGROUND OF THE INVENTION

1. Technical Field

The invention relates to data input devices. More particularly, the invention relates to a spell-check mechanism for a keyboard system having automatic correction capability.

2. Description of the Prior Art

Classic spell-check ("Edit Distance") techniques for transposed/added/dropped characters have a relatively long history. See, for example, Kukich, K., Techniques for Automatically Correcting Words, ACM Computing Surveys, Vol. 24, No. 4 (December 1992); Peterson, J. L., Computer Programs for Detecting and Correcting Spelling Errors, Communications of the ACM, Vol. 23, No. 12 (December 1980); and the treatment of spelling correction in Daciuk, J., Incremental Construction of Finite-State Automata and Transducers, and their Use in the Natural Language Processing (1998).

But classic spell-check techniques can only handle a certain number of differences between the typed word and the intended correct word. Because the best correction candidate is presumed to be the one with the fewest changes, spell-check algorithms are confounded when, for example, a typist unknowingly shifts fingers on the keyboard, or taps hurriedly and inaccurately on a touchscreen keyboard, and thus types almost every letter wrong.

To limit the amount of computational processing, particularly on lower-performance mobile devices, implementations of the classic algorithms make assumptions or impose constraints to reduce the ambiguity and thus the number of candidate words being considered. For example, they may rely on the initial letters of the word being correct or severely limit the size of the vocabulary.

Another form of automatic error correction, useful both for keyboards on touch-sensitive surfaces and for standard phone keypads, calculates the distances between each input location and nearby letters and compares the entire input sequence against possible words. The word whose letters are the closest to the input locations, combined with the highest frequency and/or recency of use of the word, is the best correction candidate. This technique easily corrects both shifted fingers and hurried tapping. It can also offer reasonable word completions even if the initial letters are not all entered accurately.

The following patent publications describe the use of a “SloppyType” engine for disambiguating and auto-correcting ambiguous keys, soft keyboards, and handwriting recognition systems: Robinson; B. Alex, Longe; Michael R., Keyboard System With Automatic Correction, U.S. Pat. No. 6,801,190 (Oct. 5, 2004), U.S. Pat. No. 7,088,345 (Aug. 8, 2006), and U.S. Pat. No. 7,277,088 (Oct. 2, 2007); Robinson et al, Handwriting And Voice Input With Automatic Correction, U.S. Pat. No. 7,319,957 (Jan. 15, 2008), and U.S. patent application Ser. No. 11/043,525 (filed Jan. 25, 2005). See also, Vargas; Garrett R., Adjusting keyboard, U.S. Pat. No. 5,748,512 (May 5, 1998).

In addition, the following publications cover combinations of manual and vocal input for text disambiguation: Longe, et al., Multimodal Disambiguation of Speech Recognition, U.S. patent application Ser. No. 11/143,409 (filed Jun. 1, 2005); and Stephanick, et al, Method and Apparatus Utilizing Voice Input to Resolve Ambiguous Manually Entered Text Input, U.S. patent application Ser. No. 11/350,234 (filed Feb. 7, 2006).

The “SloppyType” technology referenced above uses distance-based error correction on full words. Assuming that the length of the input sequence equals the length of the intended word and that each input location is in the proper order helps compensate for the increased ambiguity introduced by considering multiple nearby letters for each input. But in addition to minor targeting errors, people also transpose keys, double-tap keys, miss a key completely, or misspell a word when typing.

It would be advantageous to provide a mechanism for addressing all forms of typing errors in a way that offers both accurate corrections and acceptable performance.

SUMMARY OF THE INVENTION

An embodiment of the invention provides improvements over standard edit distance spell-check algorithms by incorporating probability-based regional auto-correction algorithms and data structures. An embodiment of the invention provides helpful word completions in addition to typing corrections. The invention also provides strategies for optimization and for ordering results of different types. Many embodiments of the invention are particularly well suited for use with ambiguous keypads, reduced QWERTY keyboards, and other input systems for mobile devices.

The careful combination of edit distance techniques with regional auto-correction techniques creates new, even-better results for the user. Thus, an incorrectly typed word can be corrected to the intended word, or a word completion can be offered, regardless of the kind of typing error. Text entry on the ubiquitous phone keypad, already aided by input disambiguation systems, is further enhanced by the ability to correct typing errors. A series of optimizations in retrieval, filtering, and ranking keep the ambiguity manageable and the processing time within required limits.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow diagram of a spell-check method for a keyboard having automatic correction according to the invention;

FIG. 2 is a hardware block diagram of an input system having spell-check and automatic correction according to the invention;

FIG. 3 is a table showing standard edit-distance calculation between an input word and a target word using a matrix as a tool;

FIG. 4 is a table illustrating set-edit-distance calculation for input on a 12-key mobile phone according to the invention;

FIGS. 5A-5C are illustrations for explaining the concepts of stem edit-distance and stem set-edit-distance according to the invention;

FIG. 6 is a flow diagram showing the steps for performing set-edit-distance calculations and incremental filtering to identify a candidate word according to the invention;

FIG. 7 is a matrix showing an example for the word “misspell” using standard edit-distance;

FIG. 8 is a matrix showing how to find standard edit-distance values based on the cell that is being calculated;

FIG. 9 is a matrix showing the case in which the stems of the compared words fully match;

FIGS. 10A-10B are a series of matrices showing incremental calculation when there is a mismatch between the words being compared;

FIG. 11 shows a rotated/transformed matrix space according to the invention;

FIG. 12 shows how to find standard edit-distance values for the rotated matrix of FIG. 11 according to the invention;

FIG. 13 is a table showing the union of adjacent input sets for an LDB retrieval screening function according to the invention;

FIG. 14 is a length independent screening map for input length 9 according to the invention;

FIG. 15 is a length dependent screening map for target word of length 6 and input length 9 according to the invention; and

FIG. 16 is a series of screen diagrams showing set-edit-distance spell correction with regional auto-correction according to the invention.

DETAILED DESCRIPTION OF THE INVENTION

Glossary

For purposes of the discussion herein, the following terms have the meaning associated therewith:

Edit Distance (also “standard” E.D.)—the well-documented algorithm to compare two strings and determine the minimum number of changes necessary to make one the same as the other.

The following abbreviations may be used herein and in the Figures:

  • T—Transposed (two sequential letters swapped);
  • I—Inserted (add a letter that wasn't in the other string);
  • D—Deleted (drop an extra letter from one string);
  • S—Substituted (replace a letter with another at the same position);
  • X—the target cell being calculated.

Enhanced Edit Distance, or Set-Edit-Distance (or “fuzzy compare”)—the subject of this patent; improved E.D. using a set of letters (with optional probabilities for each) to represent each input rather than a single letter as in standard E.D., plus other optimizations.

Mode—an operational state; for this invention, one of two states: "exact" (only using the exact-tap letter/value from each input event to match each candidate word, as with standard E.D.) or "regional"/"set-based" (using multiple letters/values per input); the mode may be either user- or system-specified.

Regional input—a method (or event) including nearby/surrounding letters (with optional probabilities) in addition to the letter/key actually tapped/pressed.

Set-based—the use of multiple character values, rather than just one, to represent each input; each set member may have a different relative probability; a set may also include, e.g. the accented variations of the base letter shown on a key.

“Classic compare”, “classic match,” SloppyType, or “regional correction”—full-word matching using auto-correction considering nearby letters, supra; generally, the number of inputs equals the number of letters in each candidate word (or word stem of a completed word).

Filter or Screen—a rule for short-circuiting the full comparison or retrieval process by identifying and eliminating words that ultimately are not added to the selection list anyway.

KDB—Keyboard Database; the information about the keyboard layout, level of ambiguity surrounding each letter, and nearby letters for each letter.

LDB—Linguistic Database, i.e. the main vocabulary for a language.

Word tap frequency—the contribution of the physical distance from the pressed keys to the likelihood that the word is the target word.

Discussion

An embodiment of the invention provides an adaptation of standard edit distance spell-check algorithms that works with probability-based auto-correction algorithms and data structures for ambiguous keypads and other predictive text input systems. The invention also provides strategies for optimization and for ordering results of different types.

FIG. 1 is a flow diagram of a spell-check method for a keyboard having automatic correction according to the invention. FIG. 1 shows a user input comprising an input sequence that is entered by the user via a data entry device (105), in which the user's input may be ambiguous. At least one dictionary (115) is also provided as a source of target meanings for the user's entry. Upon each user input event (100), the user input sequence is provided to the inventive system. Each source (110), such as the dictionary (115) discussed above, is queried. Potentially every word (120) in each dictionary is supplied, in turn, as input to the inventive system upon each user input event.

Upon receiving these inputs, the system performs incremental filtering and edit distance and regional/probability calculations (130), discarding any word that does not meet minimum thresholds for similarity with the inputs. Then the system compares the results for the input sequence and dictionary inputs with other top matches in a word choice list and discards the word if it is ranked too low on the list (140). The lowest-ranked word in the list is dropped if the list is full, and the word is inserted into the list based on ranking (150). The list is then presented to the user.
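
For illustration, the selection-list maintenance just described can be sketched in Python as follows; the list capacity MAX_LIST and the rank_key parameter are illustrative assumptions rather than values taken from the disclosure:

```python
import bisect

MAX_LIST = 8   # assumed selection-list capacity

def insert_candidate(word_list, word, rank_key):
    """Keep word_list sorted best-first; a lower rank_key value means a better match."""
    keys = [rank_key(w) for w in word_list]
    pos = bisect.bisect_right(keys, rank_key(word))
    if pos >= MAX_LIST:
        return                        # ranked too low: discard the candidate
    word_list.insert(pos, word)
    if len(word_list) > MAX_LIST:
        word_list.pop()               # drop the lowest-ranked word from a full list
```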

FIG. 2 is a hardware block diagram of an input system having spell-check and automatic correction 200 according to the invention. The input device 202 and the display 203 are coupled to a processor 201 through appropriate interfacing circuitry. Optionally, a speaker 204 is also coupled to the processor. The processor 201 receives input from the input device, and manages all output to the display and speaker. Processor 201 is coupled to a memory 210. The memory includes a combination of temporary storage media, such as random access memory (RAM), and permanent storage media, such as read-only memory (ROM), floppy disks, hard disks, or CD-ROMs. Memory 210 contains all software routines to govern system operation. Preferably, the memory contains an operating system 211, correction software 212, including software for calculating edit distance and performing spell checking, inter alia, and associated vocabulary modules 213 that are discussed in additional detail herein. Optionally, the memory may contain one or more application programs 214, 215, 216. Examples of application programs include word processors, software dictionaries, and foreign language translators. Speech synthesis software may also be provided as an application program, allowing the input system having full correction capabilities to function as a communication aid.

Edit Distance Combined with Regional Correction

Edit-Distance is the number of operations required to turn one string into another string. Essentially, this is the number of edits one might have to make, e.g. manually with a pen, to fix a misspelled word. For example, to fix an input word “ressumt” to a target word “result”, two edits must be made: an ‘s’ must be removed, and the ‘m’ must be changed to an ‘l’. Thus, “result” is edit-distance 2 from “ressumt”.

A common technique to determine the edit-distance between an input word and a target word uses a matrix as a tool. (See FIG. 3.) The approach compares characters in the input word with characters in the target word, and gives the total edit-distance between the words at the bottom-right-most element of the matrix. The details of the calculation are complex, but in general the edit-distance (represented by the number in the diagonal elements) increases as the portions of the words start to look dissimilar (and smaller value means more similar). Working across the matrix from upper left to lower right, if a character in the target word is the same as the character in the input word, edit-distance does not increase. If the character in the target word is not the same, the edit-distance increases according to a standard rule. The end result, the total edit-distance, is the bottom-right-most element (bold outline).
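
The standard calculation can be sketched in Python as follows; this is a minimal illustration of the matrix technique (including the transpose operation listed in the Glossary), not the optimized implementation described later:

```python
def edit_distance(input_word, target_word):
    m, n = len(input_word), len(target_word)
    # d[i][j] = edits needed to turn input_word[:i] into target_word[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all remaining input characters
    for j in range(n + 1):
        d[0][j] = j                     # insert all remaining target characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if input_word[i - 1] == target_word[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete (D)
                          d[i][j - 1] + 1,         # insert (I)
                          d[i - 1][j - 1] + cost)  # substitute (S) or match
            # transpose (T): two sequential letters swapped
            if (i > 1 and j > 1 and input_word[i - 1] == target_word[j - 2]
                    and input_word[i - 2] == target_word[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

assert edit_distance("ressumt", "result") == 2   # the example from the text
```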

That idea is now extended to ambiguous input where each input corresponds to a set of characters rather than single characters. One example of this is a text entry system on a mobile phone that allows a user to press keys corresponding to the characters the user wants to input, with the system resolving the ambiguity inherent in the fact that keys have multiple characters associated with them. The new term “Set-Edit-Distance” refers to the extension of the edit-distance idea to ambiguous input. To illustrate set-edit-distance, suppose that a user of a mobile phone text entry system presses the key (7,3,7,7,8,6,8) while attempting to enter the word ‘result.’ Spell correction on this ambiguous system looks for words that have the smallest set-edit-distance to the input key sequence. The technique is similar to that for edit-distance, but instead of comparing a character in the target word to a character in the input sequence, the character in the target word is compared against a set of characters represented by the input key. If the target character is in the input set, the set-edit-distance does not increase. If the target character is not in the input set, the set-edit-distance does increase according to a standard rule. A matrix corresponding to set-edit-distance is shown in FIG. 4, with the result in the bottom-right-most element (bold outline).
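
Extending the sketch above to sets gives a minimal illustration of set-edit-distance; the 12-key letter groupings below follow a standard phone keypad, and the example input corresponds to the key sequence (7,3,7,7,8,6,8) discussed above:

```python
def set_edit_distance(input_sets, target_word):
    m, n = len(input_sets), len(target_word)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # cost is zero when the target character is in the input's set
            cost = 0 if target_word[j - 1] in input_sets[i - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # substitute or set match
            if (i > 1 and j > 1
                    and target_word[j - 1] in input_sets[i - 2]
                    and target_word[j - 2] in input_sets[i - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transpose
    return d[m][n]

keypad = {'7': set("pqrs"), '3': set("def"), '8': set("tuv"), '6': set("mno")}
keys = "7377868"   # pressed while attempting to enter 'result'
print(set_edit_distance([keypad[k] for k in keys], "result"))
# prints 2: one surplus key press plus no pressed key containing 'l'
```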

The example in FIG. 4 uses key input on a mobile phone to illustrate the concept of set-edit-distance, but this idea applies to other ambiguous systems as well, such as a set of characters surrounding a pressed key on a QWERTY keyboard, or a set of characters returned from a character recognition engine. Also, the example above assumes that the characters in the set are all of equal likelihood, but the system can be extended to incorporate character probabilities in the final set-edit-distance score.

In such an extended system, the input sequence may be represented as an array of one or more character+probability pairs. The probability reflects the likelihood that the character identified by the system is what the user intended, as described in Robinson et al, Handwriting And Voice Input With Automatic Correction, U.S. Pat. No. 7,319,957 (Jan. 15, 2008) and Robinson, et al., Handwriting And Voice Input With Automatic Correction, U.S. patent application Ser. No. 11/043,525 (filed Jan. 25, 2005), each of which is incorporated herein in its entirety by this reference thereto. The probability may be based upon one or more of the following:

    • The Cartesian distance from a stylus or finger tap location to the center of each adjacent letter on a keyboard displayed on a touch screen, the frequency of the adjacent letter, and/or the distribution of taps around each letter;
    • The radial distance between a joystick tilt direction to the assigned pie slices of nearby letters of the alphabet;
    • The degree of similarity between the handwritten letter and a set of possible letter shapes/templates, e.g., the "ink trail" looks most like the letter 'c' (60% probability), but may be other letters as well, such as 'o' (20%), 'e' (10%), 'a' (10%); and
    • The probability that a letter/grapheme is represented in a phoneme or full-word utterance processed by a speech recognition front-end.

Therefore, set-edit-distance is the standard edit distance applied to ambiguous sets, where penalties are assigned to each difference between an entered and a target vocabulary word. Instead of asking “Is this letter different?” the invention asks “Is this letter one of the possible candidates in the probability set?”

Thus, an embodiment applies the following algorithm:

    • If there are two possible transformations that result in a match, choose the one with the lowest edit distance.
    • If the letter is in the input's probability set, also calculate the regional-correction probability score for that letter.
    • Accumulate all regional-correction probability scores for all letters in the word to calculate the spell correction tap frequency.
    • For zero-set-edit-distance words, i.e. same word length and each letter in the vocabulary word is present in the inputs' probability sets, only the tap frequencies are used.

A number of values are calculated or accumulated for the matching and word list ordering steps:

  • 1. Set-edit-distance;
  • 2. Tap frequency;
  • 3. Stem edit-distance;
  • 4. Word frequency; and
  • 5. Source, e.g. dictionary.

Tap frequency (TF) of the word or stem may be calculated as:
TF = probability of letter 1 * probability of letter 2 * . . . * probability of letter n.  (1)

This is similar to the standard probability-set auto-correction calculation, except that where the edit distance algorithm creates alternative alignments, the largest calculated frequency among those alternatives is chosen.
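
Equation (1) can be illustrated with a small sketch; the letter probabilities below are invented for the example:

```python
from math import prod

def tap_frequency(prob_sets, word):
    # prob_sets[i] maps each candidate letter of input i to its probability
    return prod(ps.get(ch, 0.0) for ps, ch in zip(prob_sets, word))

taps = [{'r': 0.7, 't': 0.2, 'f': 0.1},
        {'e': 0.8, 'w': 0.1, 'r': 0.1},
        {'s': 0.6, 'a': 0.3, 'd': 0.1}]
print(tap_frequency(taps, "res"))   # 0.7 * 0.8 * 0.6 = 0.336 (up to floating point)
```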

The example in FIG. 4 uses a comparison between a set-based input sequence and an entire target word. This idea can also be applied to compare a set of inputs against the beginning (stem) of a target word. This allows the system to start to predict spell corrections before the user has entered the entire input sequence. This is called stem set-edit-distance. FIGS. 5A-5B illustrate partial input sequences. In these figures, letters 'a' and 's' may be members of the same set based on physical proximity on a touchscreen QWERTY keyboard, whereas 's' and 'g' are not. Because the letter 's' in the third position of the target word is in the set (a, s, z) for the third input in FIG. 5A, the stem set-edit-distance between the input and target word is zero. Because the letter 's' in the third position is not in the set (g, h, b) for the third input in FIG. 5B, the stem set-edit-distance between the input and target word is one.

Stem edit-distance is an edit distance value for the explicitly-entered or most probable characters, commonly the exact-tap value from each input probability set, compared with the corresponding letters of a longer target word. In this case, the most probable character from each input for a touchscreen QWERTY keyboard is the exact-tap letter. Because the letter ‘s’ in the third position of the target word is not the same as the exact-tap value for the third input in FIG. 5A, the stem edit-distance between the input and target word is one. Similarly, the stem edit-distance between the input and target word in FIG. 5B is also one.
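
Assuming no inserts or deletes within the stem (so that each comparison reduces to a per-position check), the two stem measures can be sketched as follows; the neighbour sets and the target word "message" are illustrative assumptions, not the exact data of FIGS. 5A-5B:

```python
def stem_set_edit_distance(input_sets, target_word):
    # compare each input's set against the first len(input_sets) letters
    stem = target_word[:len(input_sets)]
    return sum(ch not in s for s, ch in zip(input_sets, stem))

def stem_edit_distance(exact_taps, target_word):
    # compare only the exact-tap letter of each input against the stem
    stem = target_word[:len(exact_taps)]
    return sum(a != b for a, b in zip(exact_taps, stem))

inputs = [{'exact': 'm', 'set': set('mnjk')},   # tapped 'm'
          {'exact': 'e', 'set': set('ewrd')},   # tapped 'e'
          {'exact': 'a', 'set': set('asz')}]    # tapped 'a', near 's'
target = "message"
print(stem_set_edit_distance([i['set'] for i in inputs], target))   # 0: 's' is in (a, s, z)
print(stem_edit_distance([i['exact'] for i in inputs], target))     # 1: 's' != exact-tap 'a'
```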

The sets for stem set-edit-distance can also be language specific. For example, accented variants of a character in French may be members of the same set. FIG. 5C illustrates an example where variants of ‘e’ map to the same key, resulting in a stem set-edit-distance of zero between the input and target word.

An embodiment of the invention also provides a number of innovative strategies for tuning the ordering of words in the selection list to mirror the intent or entry style of the user. For example, the results may be biased in one of two ways:

    • Full-Word Priority—for a poor keyboard, e.g. crowded or with low tactile feedback, and/or a fast or sloppy typist, the results emphasize regional, i.e. near-miss, corrections of all inputs and fewer word completions; and
    • Completions Promoted—for a good/accurate keyboard, and/or a slow, careful typist who may be looking for completions to boost throughput, the results emphasize word completions based on the input sequence (i.e. the exact-tap values) entered so far.

An embodiment of the invention provides typing correction and spell-check features that allow such systems as those which incorporate the "SloppyType" technology described above to be more useful to all typists, particularly on non-desktop devices. A "SloppyType" system provides an enhanced text entry system that uses word-level disambiguation to automatically correct inaccuracies in user keystroke entries. Specifically, a "SloppyType" system provides a text entry system comprising:

    • (a) a user input device comprising a touch sensitive surface including an auto-correcting keyboard region comprising a plurality of the characters of an alphabet, wherein each of the plurality of characters corresponds to a location with known coordinates in the auto-correcting keyboard region, wherein each time a user contacts the user input device within the auto-correcting keyboard region, a location associated with the user contact is determined and the determined contact location is added to a current input sequence of contact locations;
    • (b) a memory containing a plurality of objects, wherein each object is a string of one or a plurality of characters forming a word or a part of a word, wherein each object is further associated with a frequency of use;
    • (c) an output device with a text display area; and
    • (d) a processor coupled to the user input device, memory, and output device, said processor comprising: (i) a distance value calculation component which, for each determined contact location in the input sequence of contacts, calculates a set of distance values between the contact locations and the known coordinate locations corresponding to one or a plurality of characters within the auto-correcting keyboard region; (ii) a word evaluation component which, for each generated input sequence, identifies one or a plurality of candidate objects in memory, and for each of the one or a plurality of identified candidate objects, evaluates each identified candidate object by calculating a matching metric based on the calculated distance values and the frequency of use associated with the object, and ranks the evaluated candidate objects based on the calculated matching metric values; and (iii) a selection component for (1) identifying one or a plurality of candidate objects according to their evaluated ranking, and (2) presenting the identified objects to the user, enabling the user to select one of the presented objects for output to the text display area on the output device.

Optimizations

Theoretically, any word in a vocabulary could be considered to be a correction, given a large enough edit distance score. However, database processing must occur in real-time as the user is typing, and there is a limit to the available processing power and working memory, especially for mobile devices. Thus, it is important to optimize all parts of the combined edit distance algorithms and eliminate processing steps when possible. For example, a first-level criterion for discarding a possible word match is allowing only one edit/correction for every three actual inputs, up to a maximum of three edits against any one compared word.

Other performance enhancements can include, for example (without limitation):

    • Strategies for minimizing edit distance calculations, e.g. a first pass calculating only those cells which may allow a comparison to be rejected entirely.
    • The system starts from the results of a previous pass, such as when the user inputs another letter; or temporarily whittles down the previous word list, e.g. showing a shortened, partial, or even blurred selection list, until the user pauses entry.
    • Levels of filtering, e.g. most to least strict, are applied before, during, or after edit distance matrix calculations are completed, e.g.:
      • First letter exact, otherwise withdraw target word from consideration;
      • First letter near-miss, regional, in probability set;
      • The first letter of the vocabulary word must match one of the first two inputs, e.g. allowing one add, one drop, or one transposed pair;
      • The first letter of the vocabulary word must be in the probability set of one of the first two inputs;
      • Other filtering concepts and variations may be applied; and
      • No filtering.

Word Frequency may be approximated, based on Zipf's Law, which states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus, the most frequent word occurs approximately twice as often as the second most frequent word, which occurs twice as often as the fourth most frequent word, etc. In an embodiment, the approximation is used, rather than a value stored for each word in a vocabulary database:
F_n = F_1/n (the frequency of the Nth word is the frequency of the 1st word divided by the word's position in the frequency list).  (2)
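
A worked example of equation (2):

```python
# Word frequency approximated from rank via Zipf's Law instead of storing a
# frequency value for every vocabulary word.
def zipf_frequency(rank, f1=1.0):
    return f1 / rank

print([zipf_frequency(n) for n in (1, 2, 4)])
# [1.0, 0.5, 0.25] -> the 2nd most frequent word occurs about twice as often as the 4th
```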

Other tunable configuration parameters may include:

    • Number of word completions per near-miss section;
    • Number of spell corrections; and
    • Spell correction mode, either standard edit-distance or set-edit-distance (with or without letter probabilities).
Spell Correction Performance

Spell correction on a large word list is a very CPU-intensive task, and even more so when memory is limited. Thus, to reach acceptable performance the whole system must be optimized based on the spell correction characteristics chosen; the resulting system becomes quite inflexible from a feature perspective. Without specific optimizations, performance may be an order of magnitude or two worse.

Spell correction performance depends mostly on the following:

    • Spell correction properties, like allowed edits, modes, and filters
    • The “fuzzy compare” function (that decides if a word matches the input or not)
    • The low level LDB search function
    • The LDB format (structure and behavior)
    • The number of words in the LDB and their length distribution
    • How ambiguous the KDB is for the LDB

Each of these elements is described in more detail in the following sections.

Spell Correction Properties

Allowed Edits

The number of allowed edits is a very important performance factor. The more edits allowed, the more ambiguity in the comparison, and thus many more words match and go into the selection list for prioritization. If the comparison is too generous, too many unwanted words get into the list.

In a preferred embodiment, the number of allowed edits is related to input length and one edit is granted for every third input up to a maximum of three. This parameter of one edit per three inputs is assumed throughout the examples below.
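
This rule can be expressed directly:

```python
# The allowed-edits rule of the preferred embodiment: one edit granted per
# three inputs, capped at three edits against any one compared word.
def allowed_edits(input_length):
    return min(input_length // 3, 3)

assert [allowed_edits(n) for n in (2, 3, 6, 9, 12)] == [0, 1, 2, 3, 3]
```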

Modes and Filters

Modes and filters are used to control the result set as well as performance. Two examples of modes are exact input and regional. On a touchscreen soft keyboard, for example, the user can tap exactly on the desired letter as well as indicating an approximate region of letters. In exact input mode, only the exact-tap letter from each user input is considered. In regional mode, some or all of the nearby letters indicated by each user input are considered.

Spell correction against exact input reduces ambiguity and makes the candidates look more like what's entered (even if what is entered is incorrect). It is effective with KDBs that feature exact-tap values, such as touchscreen soft keyboards. 12 key systems (for standard phone keypads) may have no useful exact-tap value; each keypress may be represented by the key's digit instead of one of the letters, and there is no way to intuit that one letter on each key is more likely than the others to be the intended one.

Unfortunately for 12 key systems, the KDBs behave as a generous regional mode layout, i.e. each input produces at least 3 letters per set, often many more when accented vowels are included, while not having an exact-tap value that can be used for exact input mode and filtering.

A filter is a screening function that ends further consideration of a candidate word if it does not meet established minimum criteria. For example, the ONE/TWO filters are mostly for performance improvement, making the first character in the word correlate stronger with the first or second input and rejecting any candidate words that do not conform.

The “Fuzzy Compare” Function

The fuzzy compare function allows a certain difference between the input and the word being compared, the edit distance. The idea is to calculate the edit distance and then based on the value either pass or reject the word.

Calculating the exact edit distance is expensive performance-wise. A solution is to place a screening mechanism prior to the real calculation. It is acceptable to "under-reject" within reason, but "over-rejection" should be avoided if at all possible. Words that pass through screening because of under-rejection are taken out later, after the real distance calculation.

The quick screening is crucial for maintaining acceptable performance on each keypress. Potentially a huge number of words can come in for screening, and normally only a fraction gets through. Thus, for good performance, everything before the screening must also be very efficient. Work done after the screening is less important performance-wise, but there is still a substantial amount of data coming through, especially for certain input combinations where thousands of words make it all the way into the selection list insert function.

In one or more embodiments, spell correction works alongside the probability set comparison logic of regional auto-correction. There are words that are accepted by set comparisons that are not accepted based on the spell correction calculation. This is the case for regional input when spell correction is set up in exact input mode or when using exact filters. Word completion is also simpler for classic compare while costing edits in spell correction.

In the preferred embodiment, the fuzzy compare steps are:

    • 1. Screen for too short words
    • 2. Screen for set-based match
    • 3. Calculate stem edit-distance
    • 4. Screen for ONE/TWO
    • 5. Screen for set-edit-distance
    • 6. Screen for position-locked characters
    • 7. Calculate set-edit-distance and frequency
    • 8. Calculate stem edit-distance

These steps are illustrated as a flow diagram in FIG. 6, representing one implementation of the calculations 130 in FIG. 1.
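
A simplified sketch of this pipeline, reusing the set_edit_distance and stem_edit_distance sketches shown earlier, is given below; the inline screens and return values are simplifying assumptions, the locked-position and tap-frequency steps are omitted, and the real implementation performs the screening far more efficiently than a full matrix calculation:

```python
def fuzzy_compare(input_sets, exact_taps, word, max_edits):
    n = len(input_sets)
    if len(word) < n - max_edits:                        # 1. too short
        return None
    if len(word) >= n and all(ch in s for s, ch in zip(input_sets, word)):
        # 2.-3. set-based ("classic") match, possibly a completion: only the
        # stem edit-distance (and, in the full system, tap frequency) matter
        return ("classic", None, stem_edit_distance(exact_taps, word))
    if len(word) > n + max_edits:                        # too long to correct
        return None
    first_two = input_sets[0] | (input_sets[1] if n > 1 else set())
    if word[0] not in first_two:                         # 4. ONE/TWO screen
        return None
    dist = set_edit_distance(input_sets, word)           # 5.+7. screen/calculate
    if dist > max_edits:                                 # (6. locked positions omitted)
        return None
    return ("spell-corrected", dist,
            stem_edit_distance(exact_taps, word))        # 8. stem edit-distance
```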

Screening for classic compare and dealing with word completions, etc., is placed at step 2 before further spell correction calculations. That takes all the “classic” complexity out of the subsequent code. It also means that when spell correction is turned off, all other calculations can be skipped.

The algorithm is pictured as comparing two words against each other. In most embodiments this is generalized so that one word corresponds to the input symbols. In the sample matrices in the figures referenced below, the input sequence is shown vertically. Thus, rather than each input word position being a single character as with standard Edit Distance, it is really a set of characters corresponding to ambiguous or regional input. A compare yields a match if any of the characters in the set is a match.

1. Screen for Too Short Words

If a word is too short even for spell correction, that is, shorter than the input length minus the available edit distance, then it can be rejected immediately.

2. Screen for Set-Based Matches

This is an iteration over the input sequence, verifying that each position is a match to the corresponding position in the compared word; i.e. each letter of the candidate word must be present in the corresponding input set.

If there is a non-match and the word is too long for spell correction, i.e. if it is longer than the input length plus the available edit distance, then it can be rejected immediately.
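
A minimal sketch of this screen, combining the set-based match test with the too-long rejection (the names and return values are assumptions):

```python
def screen_set_based(input_sets, word, max_edits):
    is_match = (len(word) >= len(input_sets)
                and all(ch in s for s, ch in zip(input_sets, word)))
    if is_match:
        return "classic match"            # proceed to step 3 only
    if len(word) > len(input_sets) + max_edits:
        return "reject"                   # too long even for spell correction
    return "spell-correction candidate"   # continue with steps 4-8
```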

3. Calculate Stem Edit-Distance

This is an iteration over all symbols in the input sequence, and is only performed when there is a set-based match. Every difference from an exact-tap value increases the stem distance; e.g. the candidate word “tomorrow” might have a stem distance of 0 for an exact-tap input of “tom” and 1 for “tpm”. The word tap frequency is also calculated during the iteration.

If it is a valid classic match, the “fuzzy compare” of the candidate word is complete at this point. The candidate word is inserted into the selection list.

4. Screen for ONE/TWO

This is a quick check to see if the first character in the word matches the first ONE or TWO input symbols. If not, then the word is rejected.

5. Screen for Set-Edit-Distance

Conceptually this is a very simple task because enhanced edit distance follows the traditional definition using insert, delete, and substitution plus transpose (the last is commonly included for text entry correction). Doing it in an efficient way is much harder though.

The traditional way of calculating edit distance is using a matrix. An example is shown in FIG. 7. All edges (grey numbers) are predefined and always the same. The rest is calculated by traversing left-to-right and top-to-bottom, columns first. Each individual position is calculated by taking the minimum of the values that correspond to insert, delete, substitute, and transpose. The substitute and transpose values are conditioned on whether there is a match for those positions. The resulting edit distance is found in the lower right corner, "2" in this case.

To find the values based on the cell that is being calculated, i.e. the cell marked with 'X' in FIG. 8: the cost for taking the substitution ('S') cell is zero or one, depending on whether there is a match. The transpose ('T') cell can only be taken if both characters, i.e. the current and preceding characters, match, and then the cost is one. Insert ('I') and delete ('D') each also cost one. Thus, the cell's cost is the already calculated cost for that cell plus the additional cost just mentioned.

This is computationally a very expensive way to calculate the distance, especially with long words. In one embodiment, a maximum allowable edit distance is set so that 1% or less of the words pass that limit. If the allowed distance is too high, the whole word list might make it into the selection list and the whole idea of spell correction is lost. Thus, initially the exact distance is not of concern; rather just whether the result is below or above the rejection limit. For those few words that pass this test, more effort can then be spent on calculating exact distance, frequency, etc.

The goal of the screening step is to, as quickly as possible, prove that the resulting distance is above the rejection limit.

Consider the situation when the compared words match, except for length, as shown in FIG. 9. It is not possible for any of the cells to have a value that is lower. Comparing length 6 and length 9 words results in an edit distance of 3, as expected.

This initial matrix can be used when comparing any two words. Only the values in cells that are actually chosen for comparison need be updated along the way. The goal becomes to push the lower right cell above its rejection limit. To do so, it must be proven that each of the cells it relies on for this value actually has a higher value, and so on recursively.

For this example, with length difference 3 and the first character not matching (changing the first 'x' to 'y' in FIG. 10A), rejection can be proved by calculating only four cells; the rest of the related cell updates are implicit. The iterations in FIG. 10B show the recalculated cells (bold outline) and the effect on other dependent cells at each iteration.

The result is that the center diagonal, and the diagonals toward the one holding the result value, get increased values. This happens every time the last cell supporting the lowest value in another cell is increased as a result of a completed compare mismatch.

The matrices shown only describe what happens when there is a word length difference. If the length difference is zero, the center diagonal becomes the main one, and the support, i.e. a cell value high enough to affect the calculation, must come from both sides of the result diagonal to prove a rejection.

Diagonals in computations make data access patterns harder to optimize (accessing actual memory corresponding to the positions). Operating in a rotated/transformed matrix space is a further optimization; see FIG. 11. The cells in the center diagonal (bold outline) become a single row. The new “9”s (shown in grey) are added to support default values for edge cells, i.e. a value sufficiently large that if referenced it immediately exceeds the maximum possible edit-distance. In this transformed space the cell calculation relationships change as shown in FIG. 12.
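
The effect of the screening and the rotated matrix can be approximated with a band-limited calculation: only cells within the allowed edit distance of the length-difference diagonal can influence a passing result, and out-of-band cells keep the default value 9. The sketch below follows this general banded edit-distance technique rather than the patent's exact cell-update order; a production screen would also handle transposition, which is omitted here for brevity:

```python
def screen_set_edit_distance(input_sets, word, max_edits):
    BIG = 9                                    # default edge value, as in FIG. 11
    n, m = len(input_sets), len(word)
    if abs(n - m) > max_edits:
        return False                           # length difference alone rejects
    prev = [j if j <= max_edits else BIG for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [BIG] * (m + 1)
        if i <= max_edits:
            cur[0] = i
        for j in range(max(1, i - max_edits), min(m, i + max_edits) + 1):
            cost = 0 if word[j - 1] in input_sets[i - 1] else 1
            cur[j] = min(prev[j] + 1,          # delete
                         cur[j - 1] + 1,       # insert
                         prev[j - 1] + cost)   # substitute or set match
        prev = cur
    return prev[m] <= max_edits                # pass on to the exact calculation
```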

6. Screen for Position-Locked Characters

Because a full classic compare was not performed on a spell correction candidate, there is still a need to verify input symbols that have locked positions, i.e. not allowed to move or change value. This is just an iteration over input symbols with locked positions, checking that they match. If not, then the word is rejected.
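
A minimal sketch of this check; the {position: character} representation of locked inputs is an assumption:

```python
def locked_positions_match(locked, word):
    # every explicitly locked position must hold exactly the locked character
    return all(pos < len(word) and word[pos] == ch
               for pos, ch in locked.items())

print(locked_positions_match({0: 'q'}, "quest"))   # True
print(locked_positions_match({0: 'q'}, "guest"))   # False: locked 'q' missing
```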

7. Calculate Set-Edit-Distance and Frequency

The algorithm to screen for edit distance can be modified to calculate the edit distance and other things such as word frequency. It should not, however, be merged into the screening code. That code has to be kept separate and optimized for pure screening. A different version gets applied to the words that pass the screening, one that is more exhaustive because it has to evaluate different cells and pick the best choices for low distance and high frequency. It also has to deal with things, such as possible locked symbol values (just value, not position).

The candidate is rejected if the set-edit-distance value exceeds a certain threshold.

8. Calculate Stem Edit-Distance

This is also a modified copy of the screening algorithm, for two reasons:

First, the stem distance can be very different because it is always based on the exact match. Thus, the value can become higher than the intended maximum for distance. Distance values higher than the maximum might not be fully accurate because of algorithm optimizations, but this is still good enough.

Second, the stem distance is also different in that it might not take into account the full length of the candidate word. To be compatible with non-spell-corrected words, the stem distance calculation stops at the length of the input. Some additional checking is needed around the end cell to get the minimum value depending on inserts and deletes.

Low Level LDB Search Function

The fuzzy compare function can be made very efficient in screening and calculation, but that alone is not enough for good performance, particularly on embedded platforms. Depending on the input, almost all words in a vocabulary can be potential spell correction candidates. This usually happens when entering the 9th and 10th inputs in most languages, when one edit is allowed per three inputs.

At input length 9, all words of length 6-12 are potential spell-correction candidates, and every word longer than 12 characters is a potential completion candidate. For example, at input length 9, over 70% of a Finnish vocabulary might be considered for comparison based on spell correction and another 20% based on word completion. This creates significant efficiency problems, since spell correction requires the most computational effort. The following strategies seek to increase the efficiency of the database retrieval process by integrating one or more of the screening functions described earlier.
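
As a small worked example of the figures above (assuming the "one edit per three inputs" rule is implemented as a simple integer division, which is an illustrative choice):

    def candidate_length_range(input_len, inputs_per_edit=3):
        allowed_edits = input_len // inputs_per_edit
        return input_len - allowed_edits, input_len + allowed_edits

    print(candidate_length_range(9))    # (6, 12): words of length 6-12 are candidates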

Search Strategy for No Spell Correction

The preferred embodiment of the vocabulary database, as described in Unruh, Erland; Kay, David Jon, Efficient Storage and Search of Word Lists and Other Text, U.S. patent application Ser. No. 11/379,354 (filed Apr. 19, 2006), which is incorporated by reference, is designed and optimized for searching words without spell correction. The whole input length is mapped directly to interval streams, and the sparsest streams are visited first to allow quick jumping through the word list. Once there is a match, completion characters can be picked up from the streams not mapped to the input.

With this strategy, too-short words are automatically skipped because they do not have characters matching the corresponding input.

Search Strategy for Spell Correction

With spell correction, the words in the LDB fall into three categories depending on the input length. These are:

    • Too short words
    • Long words that can become completions
    • Words applicable for spell correction (certain length difference from the input length)

Each of these categories is described in the following sections.

Too Short Words

These can easily be skipped by checking the interval stream corresponding to the last character of the shortest allowed word. For example, if the minimum length is 6, then the 6th interval stream must not be empty (must not hold the terminating zero); if it is empty, it is possible to jump directly to the end of the interval.

Long Words

Just as a special interval stream can be used to check for too-short words, another stream can be used to check for long words. For example, if the maximum length is 12, then the 13th stream decides whether a word is long or not.

Long words can be handled exactly the same way as if spell correction were turned off. Streams mapped to the input can be used for jumping, and the completion part is picked up from the rest of the streams.

Spell Correction Words

Unlike the previous two categories, which can be searched efficiently, all words that fall into this category essentially have to be sent on for edit-distance calculation. That is not feasible performance-wise, so a screening function is needed at the LDB search level. As long as it provides a performance gain, this screening can be quite under-rejecting.

A complicating factor is that the spell-correction modes and filters might operate in exact mode while the input is still set-based; thus non-spell-correction candidates might be set-based matches while spell-correction candidates cannot use set-based information. The consequence is that any screening process must adhere to the set-based comparison logic as well.

An aspect of the LDB retrieval screening function for a preferred embodiment is illustrated in FIG. 13. With set-based comparison logic, the target word does not match the input sequence because the 4 GHI key does not include “d” in its set. But the set-edit-distance comparison logic allows for any input to be inserted, deleted, or transposed. Therefore, the set represented by each input expands to the union of sets including adjacent keys. The number of adjacent keys included depends on constraint parameters such as the number of allowed edits.
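
A sketch of this expansion, reflecting one reading of FIG. 13 rather than the patent's exact rule: each input position's character set is widened to the union of the sets of neighboring input positions, so that characters displaced by inserts, deletes, or transpositions can still be screened against some nearby input.

    def expand_input_sets(input_sets, allowed_edits):
        expanded = []
        for i in range(len(input_sets)):
            lo = max(0, i - allowed_edits)
            hi = min(len(input_sets), i + allowed_edits + 1)
            union = set()
            for s in input_sets[lo:hi]:
                union |= s
            expanded.append(union)
        return expanded

    # With one allowed edit, "d" now screens against position 2 as well, because
    # the DEF set from position 3 is folded into that position's union.
    keys = [set("ghi"), set("ghi"), set("mno"), set("def")]
    print("d" in expand_input_sets(keys, 1)[2])    # True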

Many of the screening functions from the fuzzy compare function may be adapted and integrated into the database retrieval process, as described in the following paragraphs.

Filter ONE/TWO

Filter ONE and TWO can be used for jumping. If interval stream zero (the first character in the word) does not match the corresponding input (the first or second input, depending on the filter), a jump can take place.

If the filter setting (exact input or regional) does not match the set-based comparison logic, then the rejection must be accompanied by a failing stream. The resulting jump is limited to the shorter of the two (the nearest end in either of the two streams). This filter applies only to spell-correction candidates.

Input Based Screening

Even though the available edits can make words match that look quite different from the input, there are still limitations on what can match. A limited number of available edits means that only a limited number of inserts and deletes can be applied, and thus there is a limit on how far a character in a word can be from its input-related stream and still count as a match.

This screening can be applied independent of filters, but the filters can be made part of the screening in an efficient way. The screening must be very fast, so the complexity must be kept low.

To reject a word, one more miss than the available number of edits is needed; for example, for edit distance 3, four misses must be found. If there are 9 inputs and the compared word has length 6, the comparison runs up to length 9, because positions 7, 8 and 9 hold the zero termination code, which always fails to match any input union. If the word is longer than the input, the comparison runs up to the length of the word.
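
A minimal sketch of the miss-counting rule, assuming the per-position character unions are supplied by one of the screening variants described in the following sections (the helper and the terminator handling are illustrative):

    TERMINATOR = "\0"

    def passes_input_screen(unions, word, allowed_edits):
        """Keep a word unless more misses than the allowed edits are found."""
        misses = 0
        for pos, union in enumerate(unions):
            ch = word[pos] if pos < len(word) else TERMINATOR   # terminator never matches
            if ch not in union:
                misses += 1
                if misses > allowed_edits:      # one miss more than the available edits
                    return False
        return True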

Length-Independent Screening

One solution to screening when the word length is not predetermined is to set up a second, fabricated input that can be used for the screening match. It is fabricated so that every position becomes a union of the surrounding original positions.

For input length 9, the union map looks like that shown in FIG. 14. Every “lxx” row is a position in the input. Each column is a position in the word being compared. For example, the fourth character in the word might match any of the first 7 inputs and would not count as a used edit. The 12th character, however, can only match the 9th input, so it is much more restrictive.

If any character in the word fails to match its union, it counts as a miss and thus calls for a potential edit. With enough misses the word can be discarded by this screening.

If a word is shorter than the input, that difference can be subtracted from the available edits immediately, and the comparison only needs to check the available positions. Thus, if the length difference equals the number of available edits, only one position has to fail to reject the word.
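
A sketch of the fabricated union input, based on one reading of FIG. 14 rather than its exact contents: word position pos may match any input whose index lies within the reach allowed by inserts and deletes, so its union is built from that window of inputs.

    def build_length_independent_unions(input_sets, allowed_edits):
        n = len(input_sets)
        unions = []
        for pos in range(n + allowed_edits):      # word positions covered by the map
            lo = max(0, pos - allowed_edits)
            hi = min(n, pos + allowed_edits + 1)
            union = set()
            for s in input_sets[lo:hi]:
                union |= s
            unions.append(union)
        return unions

The result can be fed to the miss-counting screen sketched earlier; for input length 9 with 3 allowed edits, position 3 (the fourth character) receives the union of the first 7 inputs and position 11 (the 12th character) only the 9th input, which matches the FIG. 14 example above.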

The same restrictions apply here as for the filters: if there is an exact/regional significance, a rejection must be accompanied by a failing set-based interval stream.

The longest possible jump is to the nearest end of a failing interval stream, whether union or set-based.

Because a failing set-based stream is required in order to make a jump, there is no need to further restrict the jump with regard to changes in word-length category.

Length-Dependent Screening

In the preferred embodiment of length-dependent screening, calculating the length of the compared word allows the unions to be restricted to what is applicable for that length. For example, for length 6 and input length 9 the union map looks like that of FIG. 15.

This yields more limited unions, but at the added cost of determining the word length in order to choose them. It also limits the possible jump length to within a chunk of words of the same length, because as soon as the length changes, so do the unions. Thus, it is also a requirement to minimize the number of word-length changes throughout the LDB.

Apart from having length dependent patterns, the description of independent screening applies here as well.
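
For comparison, a sketch of the length-dependent variant under one plausible restriction (not necessarily the exact map of FIG. 15): a word position may only align with an input position if the inserts and deletes needed before and after that point still fit within the allowed edits, which tightens the unions once the word length is known.

    def build_length_dependent_unions(input_sets, word_len, allowed_edits):
        n = len(input_sets)
        diff = n - word_len                 # edits already consumed by the length gap
        unions = []
        for j in range(word_len):
            union = set()
            for i in range(n):
                d = i - j
                # indels needed before this point plus indels needed after it
                if abs(d) + abs(diff - d) <= allowed_edits:
                    union |= input_sets[i]
            unions.append(union)
        return unions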

Selection List Ordering Strategies and Algorithms

The result of the combined algorithms is a list of word choices for selection that includes, in order of likelihood, either (1) the word that the user has already typed, if the input sequence is complete, or (2) the word that the user has begun to type, if the input sequence represents the stem of a word or phrase.

The word list sort order may be based on factors of regional probability, edit distance, word recency/frequency (as stored in each database), word length, and/or stem edit distance. Word list ordering may also depend on which of two or more different list profiles or strategies is being used. For example:

Full-Word Priority

    • 1. Full word always comes before word completion;
    • 2. Source dictionary, e.g. main vocabulary, contextual, user-defined, recency ordered, plug-in, macro substitution;
    • 3. Edit distance, e.g. smaller value ahead of greater;
    • 4. Stem edit distance, e.g. smaller first; and only if Edit Distance>0 and the same for both word choices;
    • 5. Frequency, e.g. largest first; Tap Frequency×Word Frequency.

Note that the order of evaluation is as listed above; e.g. criterion 3 is considered only if criterion 2 is the same for the compared items. Because of this, for example, spell corrections of custom user words can appear ahead of regional corrections of standard vocabulary words.
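
A minimal sketch of the Full-Word Priority ordering as a lexicographic sort key; the field names are illustrative, and source_rank is assumed to encode the dictionary precedence of criterion 2 (lower ranks sort first).

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        word: str
        is_completion: bool      # False for full words
        source_rank: int         # main, contextual, user-defined, ... (lower first)
        edit_distance: int
        stem_edit_distance: int
        frequency: float         # e.g. tap frequency x word frequency

    def full_word_priority_key(c: Candidate):
        return (
            c.is_completion,                                      # 1. full words first
            c.source_rank,                                        # 2. source dictionary
            c.edit_distance,                                      # 3. smaller first
            c.stem_edit_distance if c.edit_distance > 0 else 0,   # 4. only if distance > 0
            -c.frequency,                                         # 5. larger first
        )

    def rank(candidates):
        """Return the selection list in Full-Word Priority order."""
        return sorted(candidates, key=full_word_priority_key)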

Word Completions Promoted

    • 1. Stem edit distance;
    • 2. Word completion or not;
    • 3. Source;
    • 4. Edit distance;
    • 5. Frequency.

Because stem edit distance is the first criterion, completion is the second, etc., the word list effectively gets segmented as:

    • full word with 0 misses, i.e. the exact-tap input sequence is the same as the word
    • completion(s) with 0-miss stem(s)
    • full word(s) with 1 near-miss
    • completion(s) with 1 near-miss stem(s)
    • . . .

The system may allow the basic strategy to be specified. It may also automatically adapt the ordering based on recognized patterns of word selection, over and above the frequency/recency information recorded in the source databases. For example, the system may detect that most of the time the user selects a word completion whose first letters exactly match the input so far, and so may shift the word list ordering bias towards the “Completions Promoted” profile.

FIG. 16 illustrates a sample user interface during operation of an embodiment of the invention; in this case, showing set-edit-distance spell correction with regional auto-correction. In this embodiment on a mobile device, the candidate words appear across the bottom of the screen upon each user input. The string at the left, shown in italics, is the exact-tap letter sequence, which for this device is each key pressed on its QWERTY thumbboard. The arrowhead indicates the default (highest ranked) word choice. The second screen shows three word completions offered after the keys “b” and “o” have been pressed. The third screen shows “bowl” as a candidate, which is a close match to the input sequence “bok” if the letter “w” is inserted (standard edit-distance of 1) in the middle and the “l” is adjacent to the “k” on the keyboard (using regional auto-correction). The fifth screen shows “going” as the default word choice, because the “g” and “i” are each adjacent to the inputs of “b” and “k”; shown as second word choice is “being”, which substituted “e” for the “o” (edit-distance of 1). The correction parameters of this embodiment penalize regional auto-correction differences less than edit-distance differences.

Other Features and Applications

Auto-substitution, e.g. macros: regional correction and spell correction may both apply to the shortcut, while word completion can apply to the expanded text. Thus, if an input sequence approximately matches both the shortcut and the stem of the expanded text, the ranking of the macro may be increased. Macros may be predefined or user-definable.

Keyword flagging, for advertising purposes, could benefit from auto-substitution and/or spell correction. For example, if the word in the mobile message was text slang or misspelled, the invention could still find a valid sponsored keyword.

An embodiment of the invention could be applied to an entire message buffer, i.e. batch mode, whether its text was originally entered ambiguously or explicitly, e.g. via multi-tap, or received as a message or file from another device.

The spell-corrected word choice can become the basis for further inputs, word completions, etc., if the input method permits auto-extending a word choice, including build-around rules with punctuation, etc. In one embodiment, a cascading menu pops up with a list of word completions for the selected word or stem.

The invention can also be applied to ambiguous entry for search and discovery. For example, if the user's input sequence is not closely matched by the content of the mobile device or the contents of server-based search engines, one or more spell-corrected interpretations which do result in matches may be offered.

While the examples above illustrate the invention's use with Latin-based languages, other embodiments may address the particular needs of other alphabets or scripts.

Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the Claims included below.
