WO1995034035A1 - Method of training neural networks used for speech recognition


Info

Publication number
WO1995034035A1
Authority
WO
WIPO (PCT)
Prior art keywords
data block
data
frames
sequence
data blocks
Prior art date
Application number
PCT/US1995/005002
Other languages
French (fr)
Inventor
Shay-Ping Thomas Wang
Original Assignee
Motorola Inc.
Priority date
Filing date
Publication date
Application filed by Motorola Inc. filed Critical Motorola Inc.
Priority to AU24270/95A priority Critical patent/AU2427095A/en
Priority to CA002190631A priority patent/CA2190631C/en
Priority to DE19581663T priority patent/DE19581663T1/en
Priority to GB9625250A priority patent/GB2303237B/en
Publication of WO1995034035A1 publication Critical patent/WO1995034035A1/en


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/16: Speech classification or search using artificial neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/245: Classification techniques relating to the decision surface
    • G06F18/2453: Classification techniques relating to the decision surface, non-linear, e.g. polynomial classifier
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/26: Speech to text systems
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063: Training
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/10: Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command

Abstract

A speech-recognition system for recognizing isolated words includes pre-processing circuitry (3) for performing analog-to-digital conversion and cepstral analysis, and a plurality of neural networks (12-14) which compute discriminant functions based on polynomial expansions. The system may be implemented using either hardware or software or a combination thereof. The speech waveform of a spoken word is analyzed and converted into a sequence of data frames. The sequence of frames is partitioned into data blocks, and the data blocks are then broadcast to a plurality of neural networks. Using the data blocks, the neural networks compute polynomial expansions. The output of the neural networks is used to determine the identity of the spoken word. The neural networks utilize a training algorithm which does not require repetitive training and which yields a global minimum to each given set of training examples.

Description

METHOD OF TRAINING NEURAL NETWORKS USED FOR SPEECH RECOGNITION
Related Inventions
The present invention is related to the following inventions which are assigned to the same assignee as the present invention:
(1) "Neural Network and Method of Using Same", having Serial No. 08/076,601, filed June 14, 1993;
(2) "Neural Network Utilizing Logarithmic Function and Method of Using Same", having Serial No. 08/176,601, filed January 3, 1994;
(3) "Speech-Recognition System Utilizing Neural
Networks and Method of Using Same", having Serial No.
_____,____, filed on even date herewith; and
(4) "Method of Partitioning a Sequence of Data
Frames", having Serial No. ___, ___, filed on even date herewith.
The subject matter of the above-identified related inventions is hereby incorporated by reference into the disclosure of this invention.
Technical Field
This invention relates generally to speech-recognition devices, and, in particular, to a method of training neural networks used in a speech-recognition system which is capable of speaker-independent, isolated word recognition.
Background of the Invention
For many years, scientists have been trying to find a means to simplify the interface between man and machine. Input devices such as the keyboard, mouse, touch screen, and pen are currently the most commonly used tools for implementing a man/machine interface. However, a simpler and more natural interface between man and machine may be human speech. A device which automatically recognizes speech would provide such an interface.
Potential applications for an automated speech-recognition device include a database query technique using voice commands, voice input for quality control in a manufacturing process, a voice-dial cellular phone which would allow a driver to focus on the road while dialing, and a voice-operated prosthetic device for the physically disabled.
Unfortunately, automated speech recognition is not a trivial task. One reason is that speech tends to vary considerably from one person to another. For instance, the same word uttered by several persons may sound significantly different due to differences in accent, speaking speed, gender, or age. In addition to speaker variability, co-articulation effects, speaking modes (shout/whisper), and background noise present enormous problems to speech-recognition devices.
Since the late 1960's, various methodologies have been introduced for automated speech recognition. While some methods are based on extended knowledge with corresponding heuristic strategies, others rely on speech databases and learning methodologies. The latter methods include dynamic time-warping (DTW) and hidden-Markov modeling (HMM). Both of these methods, as well as the use of time-delay neural networks (TDNN), are discussed below.
Dynamic time-warping is a technique which uses an optimization principle to minimize the errors between an unknown spoken word and a stored template of a known word. Reported data shows that the DTW technique is very robust and produces good recognition. However, the DTW technique is computationally intensive. Therefore, it is impractical to implement the DTW technique for real-world applications.
Instead of directly comparing an unknown spoken word to a template of a known word, the hidden-Markov modeling technique uses stochastic models for known words and compares the probability that the unknown word was generated by each model. When an unknown word is uttered, the HMM technique will check the sequence (or state) of the word, and find the model that provides the best match. The HMM technique has been successfully used in many commercial applications; however, the technique has many drawbacks. These drawbacks include an inability to differentiate acoustically similar words, a susceptibility to noise, and computational intensiveness.
Recently, neural networks have been used for problems that are highly unstructured and otherwise intractable, such as speech recognition. A time-delay neural network is a type of neural network which addresses the temporal effects of speech by adopting limited neuron connections. For limited word recognition, a TDNN shows slightly better results than the HMM method. However, a TDNN suffers from some serious drawbacks.
First, the training time for a TDNN is very lengthy, on the order of several weeks. Second, the training algorithm for a TDNN often converges to a local minimum, which is not the optimum solution. The optimum solution would be a global minimum.
In summary, the drawbacks of existing known methods of automated speech recognition (e.g. algorithms requiring impractical amounts of computation, limited tolerance to speaker variability and background noise, excessive training time, etc.) severely limit the acceptance and proliferation of speech-recognition devices in many potential areas of utility. There is thus a significant need for an automated speech-recognition system which provides a high level of accuracy, is immune to background noise, does not require repetitive training or complex computations, produces a global minimum, and is insensitive to differences in speakers.
Summary of Invention
It is therefore an advantage of the present invention to provide a method of training neural networks used in a speech-recognition system which is insensitive to differences in speakers or to background noise.
It is a further advantage of the present invention to provide a method of training a speech-recognition device which does not require repetitive iterations of training epochs.
It is also an advantage of the present invention to provide a method of training a speech-recognition device which yields a global minimum to each given set of training vectors.
These and other advantages are achieved in accordance with a preferred embodiment of the invention by providing a method of training a plurality of neural networks used in a speech-recognition system, each of the neural networks comprising a plurality of neurons, the method producing a plurality of training examples wherein each of the training examples comprises an input portion and an output portion, the method comprising the following steps: (a) receiving an example spoken word; (b) performing analog-to-digital conversion of the spoken word, the conversion producing a digitized word; (c) performing cepstral analysis of the digitized word, the analysis producing a sequence of data frames; (d) generating a plurality of data blocks from the sequence of data frames; (e) selecting one of the plurality of data blocks and equating the input portion of one of the plurality of training examples to the selected data block; (f) selecting one of the plurality of neural networks and determining if the selected neural network is to recognize the selected data block; if so, setting the output portion of the one training example to one; if not, setting the output portion of the one training example to zero; (g) saving the one training example; (h) determining if there is another one of the plurality of data blocks; if so, returning to step (e); if not, ending the method.
Brief Description of the Drawings
The invention is pointed out with particularity in the appended claims. However, other features of the invention will become more apparent and the invention will be best understood by referring to the following detailed description in conjunction with the accompanying drawings in which:
FIG. 1 shows a contextual block diagram of a speech-recognition system.
FIG. 2 shows a conceptual diagram of a speech-recognition system which utilizes the present invention.
FIG. 3 shows a flow diagram of a method of operating the speech-recognition system illustrated in FIG. 2.
FIG. 4 illustrates data inputs and outputs of a divide-and-conquer algorithm of the present invention.
FIG. 5 shows a flow diagram of a method of executing a divide-and-conquer algorithm of the present invention.
FIG. 6 shows a flow diagram of a method of training a neural network to recognize speech in accordance with a preferred embodiment of the present invention.
Detailed Description of a Preferred Embodiment
FIG. 1 shows a contextual block diagram of a speech-recognition system. The system comprises a microphone 1 or equivalent means for receiving audio input in the form of speech input and converting sound into electrical energy, pre-processing circuitry 3 which receives electrical signals from microphone 1 and performs various tasks such as wave-form sampling, analog-to-digital (A/D) conversion, cepstral analysis, etc., and a computer 5 which executes a program for recognizing speech and accordingly generates an output identifying the recognized speech.
The operation of the system commences when a user speaks into microphone 1. In a preferred embodiment, the system depicted by FIG. 1 is used for isolated word recognition. Isolated word recognition takes place when a person speaking into the microphone makes a distinct pause between each word.
When a speaker utters a word, microphone 1 generates a signal which represents the acoustic wave-form of the word. This signal is then fed to pre-processing circuitry 3 for digitization by means of an A/D converter (not shown). The digitized signal is then subjected to cepstral analysis, a method of feature extraction, which is also performed by pre-processing circuitry 3. Computer 5 receives the results of the cepstral analysis and uses these results to determine the identity of the spoken word.
The following is a more detailed description of the pre-processing circuitry 3 and computer 5. Pre-processing circuitry 3 may include a combination of hardware and software components in order to perform its tasks. For example, the A/D conversion may be performed by a specialized integrated circuit, while the cepstral analysis may be performed by software which is executed on a microprocessor.
Pre-processing circuitry 3 includes appropriate means for A/D conversion. Typically, the signal from microphone 1 is an analog signal. An A/D converter (not shown) samples the signal from microphone 1 several thousand times per second (e.g. between 8000 and 14,000 times per second in a preferred embodiment). Each of the samples is then converted to a digital word, wherein the length of the word is between 12 and 32 bits. The digitized signal comprises one or more of these digital words. Those of ordinary skill in the art will understand that the sampling rate and word length of A/D converters may vary and that the numbers given above do not place any limitations on the sampling rate or word length of the A/D converter which is included in the present invention.
The cepstral analysis, or feature extraction, which is performed on the digitized signal, results in a representation of the signal which characterizes the relevant features of the spoken speech. It can be regarded as a data reduction procedure that retains vital characteristics of the speech and eliminates undesirable interference from irrelevant characteristics of the digitized signal, thus easing the decision-making process of computer 5.
The cepstral analysis is performed as follows. First, the digitized samples, which make up the digitized signal, are divided into a sequence of sets. Each set includes samples taken during an interval of time which is of fixed duration. To illustrate, in a preferred embodiment of the present invention the interval of time is 15 milliseconds. If the duration of a spoken word is, for example, 150 milliseconds, then circuitry 3 will produce a sequence of ten sets of digitized samples.
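To make the framing step concrete, here is a minimal Python sketch; it is illustrative rather than part of the patent, and the function name and the 8 kHz rate are assumptions chosen within the ranges given above.

```python
SAMPLE_RATE_HZ = 8000   # assumed rate, within the 8,000-14,000 samples/s range above
INTERVAL_MS = 15        # fixed interval duration from the preferred embodiment
SAMPLES_PER_SET = SAMPLE_RATE_HZ * INTERVAL_MS // 1000  # 120 samples per set

def frame_samples(samples):
    """Partition a digitized signal into consecutive fixed-duration sets;
    a 150 ms word at 8 kHz (1200 samples) yields ten sets, as in the text."""
    return [samples[i:i + SAMPLES_PER_SET]
            for i in range(0, len(samples), SAMPLES_PER_SET)]
```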
Next, a p-th order (typically p = 12 to 14) linear prediction analysis is applied on each set of samples to yield p prediction coefficients. The prediction coefficients are then converted into cepstrum coefficients using the following recursion formula:

$$c(n) = a(n) + \sum_{k=1}^{n-1} \left(1 - \frac{k}{n}\right) a(k)\, c(n-k), \qquad 1 \le n \le p \qquad \text{Equation (1)}$$

wherein c(n) represents the vector of cepstrum coefficients, a(n) represents the prediction coefficients, p is equal to the number of cepstrum coefficients, n and k represent integer indices, a(k) represents the kth prediction coefficient, and c(n - k) represents the (n - k)th cepstrum coefficient.
The vector of cepstrum coefficients is usually weighted by a sine window of the form
α(n) = 1 + (L/2) sin(πn/L) Equation (2)
wherein 1 ≤ n ≤ p, and L is an integer constant, giving the weighted cepstrum vector C(n), wherein
C(n) = c(n) α(n) Equation (3)
This weighting is commonly referred to as cepstrum liftering. The effect of this liftering process is to smooth the spectral peaks in the spectrum of the speech sample. It has also been found that cepstrum liftering suppresses the existing variations in the high and low cepstrum coefficients, and thus considerably improves the performance of the speech-recognition system.
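A minimal Python sketch of Equations (1) through (3) follows; it assumes 1-indexed coefficient arrays (index 0 unused), and the function names are hypothetical, not from the patent.

```python
import math

def lpc_to_cepstrum(a, p):
    """Apply the recursion of Equation (1): convert prediction coefficients
    a[1..p] into cepstrum coefficients c[1..p]; a[0] and c[0] are unused."""
    c = [0.0] * (p + 1)
    for n in range(1, p + 1):
        c[n] = a[n] + sum((1 - k / n) * a[k] * c[n - k] for k in range(1, n))
    return c

def lifter(c, p, L):
    """Weight the cepstrum vector by the sine window of Equation (2),
    giving the liftered vector C(n) = c(n) * alpha(n) of Equation (3)."""
    return [c[n] * (1 + (L / 2) * math.sin(math.pi * n / L))
            for n in range(1, p + 1)]
```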
Thus, the result of the cepstral analysis is a sequence of smoothed log spectra wherein each spectrum corresponds to a discrete time interval from the period during which the word was spoken.
The significant features of the speech signal are thus preserved in the spectra. For each spectrum, pre-processing circuitry 3 generates a respective data frame which comprises data points from the spectrum. The generation of a data frame per spectrum results in a time-ordered sequence of data frames. This sequence is passed to computer 5.
In a preferred embodiment, each data frame contains twelve data points, wherein each of the data points represents the value of cepstrally-smoothed spectrum at a specific frequency. The data points are 32-bit digital words. Those skilled in the art will understand that the present invention places no limits on the number of data points per frame or the bit length of the data points; the number of data points contained in a data frame may be twelve or any other appropriate value, while the data point bit length may be 32 bits, 16 bits, or any other value.
The essential function of computer 5 is to determine the identity of the word which was spoken. In a preferred embodiment of the present invention, computer 5 may include a partitioning program for manipulating the sequence of data frames, a plurality of neural networks for computing polynomial expansions, and a selector which uses the outputs of the neural networks to classify the spoken word as a known word. Further details of the operation of computer 5 are given below.
FIG. 2 shows a conceptual diagram of a speech-recognition system which utilizes the present invention.
In a preferred embodiment, the speech-recognition system recognizes isolated spoken words. A microphone 1 receives speech input from a person who is speaking, and converts the input into electrical signals. The electrical signals are fed to pre-processing circuitry 3.
Pre-processing circuitry 3 performs the functions described above regarding FIG. 1. Circuitry 3 performs A/D conversion and cepstral analysis, and circuitry 3 may include a combination of hardware and software components in order to perform its tasks. The output of pre-processing circuitry 3 takes the form of a sequence of data frames which represent the spoken word. Each data frame comprises a set of data points (32-bit words) which correspond to a discrete time interval from the period during which the word was spoken. The output of circuitry 3 is transmitted to computer 5.
Computer 5 may be a general-purpose digital computer or a specific-purpose computer. Computer 5 comprises suitable hardware and/or software for performing a divide-and-conquer algorithm 11. Computer 5 further comprises a plurality of neural networks represented by 1st Neural Network 12, 2nd Neural Network 13, and Nth Neural Network 14. The output of each neural network 12, 13, and 14 is fed into a respective accumulator 15, 16, and 17. The outputs of accumulators 15-17 are fed into a selector 18, whose output represents the recognized speech word.
Divide-and-conquer algorithm 11 receives the sequence of data frames from pre-processing circuitry 3, and from the sequence of data frames it generates a plurality of data blocks. In essence, algorithm 11 partitions the sequence of data frames into a set of data blocks, each of which comprises a subset of data frames from the input sequence. The details of the operation of divide-and-conquer algorithm 11 are given below in the section entitled "Divide-and-Conquer Algorithm". In a preferred embodiment, each of four data blocks includes five data frames from the input sequence.
The first data block comprises the first data frame and every fourth data frame thereafter appearing in the sequence of data frames. The second data block comprises the second data frame and every fourth data frame thereafter in the sequence. And so on, successive data frames being allocated to each of the four data blocks, in turn, until each data block contains the same number of data frames. If the number of data frames turns out to be insufficient to provide each block with an identical number of data frames, then the last data frame in the sequence is copied into the remaining data blocks, so that each contains the same number of data frames.
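This round-robin allocation, including the copying of the last frame into short blocks, can be sketched in Python as follows; the helper is hypothetical and assumes a non-empty frame sequence with five frames per block, as in the preferred embodiment.

```python
import math

def divide_and_conquer(frames, frames_per_block=5):
    """Deal frames round-robin into ceil(len(frames)/frames_per_block) blocks,
    then copy the last frame into any block that comes up short."""
    num_blocks = math.ceil(len(frames) / frames_per_block)
    blocks = [[] for _ in range(num_blocks)]
    for i, frame in enumerate(frames):
        blocks[i % num_blocks].append(frame)   # frame i goes to block i mod num_blocks
    for block in blocks:                       # pad short blocks with the final frame
        while len(block) < frames_per_block:
            block.append(frames[-1])
    return blocks
```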
A means for distributing the data blocks is used to transfer the data blocks from algorithm 11 to the inputs of neural networks 12, 13, and 14. In turn, each data block is transferred simultaneously to neural networks 12, 13, and 14. While FIG. 2 shows only three neural networks in the speech-recognition system, it will be understood by one of ordinary skill that any number of neural networks may be used if a particular application requires more or fewer than three neural networks.
It will be apparent to one of ordinary skill that each neural network comprises a plurality of neurons.
In a preferred embodiment of the present invention, each of the neural networks may have been previously trained to recognize a specific set of speech phonemes. Generally, a spoken word comprises one or more speech phonemes.
Neural networks 12, 13, and 14 act as classifiers that determine which word was spoken, based on the data blocks. In general, a classifier makes a decision as to which class an input pattern belongs. In a preferred embodiment of the present invention, each class is labeled with a known word, and data blocks are obtained from a predefined set of spoken words (the training set) and used to determine boundaries between the classes, boundaries which maximize the recognition performance for each class.
In a preferred embodiment, a parametric decision method is used to determine whether a spoken word belongs to a certain class. With this method, each neural network computes a different discriminant function yj(X), wherein X = {x1, x2, . . ., xi} is the set of data points contained in a data block, i is an integer index, and j is an integer index corresponding to the neural network. Upon receiving a data block, the neural networks compute their respective discriminant functions. If the discriminant function computed by a particular neural network is greater than the discriminant function of each of the other networks, then the data block belongs to the particular class corresponding to that neural network.
In other words, each neural network defines a different class; thus, each neural network recognizes a different word. For example, neural network 12 may be trained to recognize the word "one", neural network 13 may be trained to recognize the word "two", and so forth. The method of training the neural networks is described below in the section entitled "Neural Network Training".
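Expressed as code, the parametric decision rule above is simply an argmax over discriminant values; in the hypothetical sketch below, each network is assumed to be a callable returning yj(X) for a data block.

```python
def classify_block(block, networks):
    """Return the index j of the network whose discriminant function yj(X)
    is largest for this data block; index j labels the recognized class."""
    scores = [net(block) for net in networks]
    return scores.index(max(scores))
```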
The discriminant functions computed by the neural networks of the present invention are based upon the use of a polynomial expansion and, in a loose sense, the use of an orthogonal function, such as a sine, cosine, exponential/logarithmic, Fourier transformation, Legendre polynomial, non-linear basis function such as a Volterra function or a radial basis function, or the like, or a combination of polynomial expansion and orthogonal functions.
A preferred embodiment employs a polynomial expansion of which the general case is represented by Equation 4 as follows:

$$y = \sum_{i=1}^{N} w_{i-1}\, x_1^{g_{1i}}\, x_2^{g_{2i}} \cdots x_n^{g_{ni}} \qquad \text{Equation (4)}$$

wherein xi represent the inputs to the neural network and can be a function such as xi = fi(zj), wherein zj is any arbitrary variable, and wherein the indices i and j may be any positive integers; wherein y represents the output of the neural network; wherein wi-1 represents the weight for the ith neuron; wherein g1i, . . ., gni represent gating functions for the ith neuron and are integers, being 0 or greater in a preferred embodiment; and n is the number of inputs.
Each term of Equation 4 expresses a neuron output and the weight and gating functions associated with such neuron. The number of terms of the polynomial expansion to be used in a neural network is based upon a number of factors, including the number of available neurons, the number of training examples, etc. It should be understood that the higher order terms of the polynomial expansion usually have less significance than the lower order terms. Therefore, in a preferred embodiment, the lower order terms are chosen whenever possible, based upon the various factors mentioned above. Also, because the unit of measurement associated with the various inputs may vary, the inputs may need to be normalized before they are used.
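As an illustrative (non-authoritative) rendering of Equation 4, the sketch below evaluates the expansion from a list of weights and, for each term, the tuple of gating exponents (g1i, . . ., gni); the names are hypothetical.

```python
def polynomial_output(x, weights, gating):
    """Evaluate Equation (4): y = sum over i of w[i-1] * x1**g1i * ... * xn**gni.
    gating[i] holds the integer exponents of term i; an exponent of 0 drops
    that input from the term, which is how lower-order terms arise."""
    y = 0.0
    for w, exponents in zip(weights, gating):
        term = w
        for xj, g in zip(x, exponents):
            term *= xj ** g
        y += term
    return y

# Example: a second-order expansion in two inputs,
# y = w0 + w1*x1 + w2*x2 + w3*x1*x2
gating = [(0, 0), (1, 0), (0, 1), (1, 1)]
```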
Equation 5 is an alternative representation of Equation 4, showing terms up to the third order:

$$y = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} w_{f1(i)} x_i^2 + \sum_{i=1}^{n} \sum_{j=i+1}^{n} w_{f2(i,j)} x_i x_j + \sum_{i=1}^{n} w_{f3(i)} x_i^3 + \cdots \qquad \text{Equation (5)}$$

wherein the variables have the same meaning as in Equation 4 and wherein f1(i) is an index function in the range of n+1 to 2n; f2(i,j) is an index function in the range of 2n+1 to 2n+(n)(n-1)/2; and f3(i) is an index function in the range of 2n+1+(n)(n-1)/2 to 3n+(n)(n-1)/2. And f4 through f6 are represented in a similar fashion.
Those skilled in the art will recognize that the gating functions are embedded in the terms expressed by Equation 5. For example, Equation 5 can be represented as follows:
$$y = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_i x_i + \cdots + w_n x_n + w_{n+1} x_1^2 + \cdots + w_{2n} x_n^2 + w_{2n+1} x_1 x_2 + w_{2n+2} x_1 x_3 + \cdots + w_{3n-1} x_1 x_n + w_{3n} x_2 x_3 + w_{3n+1} x_2 x_4 + \cdots + w_{2n+n(n-1)/2} x_{n-1} x_n + \cdots + w_{N-1} x_1^{g_{1N}} x_2^{g_{2N}} \cdots x_n^{g_{nN}} + \cdots \qquad \text{Equation (6)}$$

wherein the variables have the same meaning as in Equation 4.
It should be noted that although the gating functions appear explicitly only in the last term shown of Equation 6, each of the other terms has its giN values implicitly defined by the exponents shown (e.g. for the w1 x1 term, g11 = 1 and gi1 = 0 for i = 2, 3, . . ., n). N is any positive integer and represents the Nth neuron in the network.
In the present invention, a neural network will generate an output for every data block it receives. Since a spoken word may be represented by a sequence of data blocks, each neural network may generate a sequence of outputs. To enhance the classification performance of the speech-recognition system, each sequence of outputs is summed by an accumulator.
Thus an accumulator is attached to the output of each neural network. As described above regarding FIG. 2, accumulator 15 is responsive to output from neural network 12, accumulator 16 is responsive to output from neural network 13, and accumulator 17 is responsive to output from neural network 14. The function of an accumulator is to sum the sequence of outputs from a neural network. This creates a sum which corresponds to the neural network, and thus the sum corresponds to a class which is labeled by a known word. Accumulator 15 adds each successive output from neural network 12 to an accumulated sum, and accumulators 16 and 17 perform the same function for neural networks 13 and 14, respectively. Each accumulator presents its sum as an output.
Selector 18 receives the sums from the accumulators either sequentially or concurrently. In the former case, selector 18 receives the sums in turn from each of the accumulators, for example, receiving the sum from accumulator 15 first, the sum from accumulator 16 second, and so on; in the latter case, selector 18 receives the sums from accumulators 15, 16, and 17 concurrently. After receiving the sums, selector 18 then determines which sum is largest and assigns the corresponding known word label, i.e. the recognized speech word, to the output of the speech-recognition system.
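The accumulator and selector stages can be summarized in one short sketch, reusing the callable-network assumption from the earlier example; the names are hypothetical.

```python
def recognize(blocks, networks, vocabulary):
    """Broadcast every data block to every network, accumulate each network's
    sequence of outputs into a sum, and select the word with the largest sum."""
    sums = [0.0] * len(networks)
    for block in blocks:                  # each block is broadcast in turn
        for j, net in enumerate(networks):
            sums[j] += net(block)         # one accumulator per network
    return vocabulary[sums.index(max(sums))]  # selector picks the largest sum
```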
FIG. 3 shows a flow diagram of a method of operating the speech-recognition system illustrated in FIG. 2. In box 20, a spoken word is received from the user by microphone 1 and converted to an electrical signal. In box 22, A/D conversion is performed on the speech signal. In a preferred embodiment, A/D conversion is performed by pre-processing circuitry 3 of FIG. 2.
Next, in box 24, cepstral analysis is performed on the digitized signal resulting from the A/D conversion. The cepstral analysis is, in a preferred embodiment, also performed by pre-processing circuitry 3 of FIG. 2. The cepstral analysis produces a sequence of data frames which contain the relevant features of the spoken word.
In box 26, a divide-and-conquer algorithm, the steps of which are shown in FIG. 5, is used to generate a plurality of data blocks from the sequence of data frames. The divide-and-conquer algorithm is a method of partitioning the sequence of frames into a set of smaller, more manageable data blocks.
In box 28, one of the data blocks is broadcast to the neural networks. Upon exiting box 28, the procedure continues to box 30.
In box 30, each of the neural networks uses the data block in computing a discriminant function which is based on a polynomial expansion. A different discriminant function is computed by each neural network and generated as an output. The discriminant function computed by a neural network is determined prior to operating the speech-recognition system by using the method of training the neural network as shown in FIG. 6.
In box 32, the output of each neural network is added to a sum, wherein there is one sum generated for each neural network. This step generates a plurality of neural network sums, wherein each sum corresponds to a neural network.
In decision box 34, a check is made to determine whether there is another data block to be broadcast to the neural networks. If so, the procedure returns to box 28. If not, the procedure proceeds to box 36.
Next, in box 36, the selector determines which neural network sum is the largest, and assigns the known word label which corresponds to the sum as the output of the speech-recognition system.
DIVIDE-AND-CONQUER ALGORITHM
FIG. 4 illustrates data inputs and outputs of a divide-and-conquer algorithm of the present invention. The divide-and-conquer algorithm is a method of partitioning the sequence of data frames into a set of smaller data blocks. The input to the algorithm is the sequence of data frames 38, which, in the example illustrated, comprises data frames 51-70. The sequence of data frames 38 contains data which represents the relevant features of a speech sample.
In a preferred embodiment, each data frame contains twelve data points, wherein each of the data points represents the value of a cepstrum coefficient or a function which is based on a cepstrum coefficient.
The data points are 32-bit digital words. Each data frame corresponds to a discrete time interval from the period during which the speech sample was spoken.
Those skilled in the art will understand that the present invention places no limits on the number of data points per frame or the bit length of the data points; the number of data points contained in a data frame may be twelve or any other value, while the data point bit length may be 32 bits, 16 bits, or any other value.
Additionally, the data points may be used to represent data other than values from a cepstrally-smoothed spectral envelope. For example, in various applications, each data point may represent a spectral amplitude at a specific frequency.
The divide-and-conquer algorithm 11 receives each frame of the speech sample sequentially and assigns the frame to one of several data blocks. Each data block comprises a subset of data frames from the input sequence of frames. Data blocks 42, 44, 46, and 48 are output by the divide-and-conquer algorithm 11. Although FIG. 4 shows the algorithm generating only four data blocks, the divide-and-conquer algorithm 11 is not limited to generating only four data blocks and may be used to generate more or fewer than four blocks.
FIG. 5 shows a flow diagram of a method of executing a divide-and-conquer algorithm of the present invention. The divide-and-conquer algorithm partitions a sequence of data frames into a set of data blocks according to the following steps.
As illustrated in box 75, the number of data blocks to be generated by the algorithm is first calculated, in the following manner. First, the number of frames per data block and the number of frames in the sequence are received; both are integers. Second, the number of frames is divided by the number of frames per block. Next, the result of the division operation is rounded up to the nearest integer, resulting in the number of data blocks to be generated by the divide-and-conquer algorithm. Upon exiting box 75, the procedure continues in box 77.
In box 77 the first frame of the sequence of frames is equated to a variable called the current frame. It will be apparent to one of ordinary skill that the current frame could be represented by either a software variable or, in hardware, as a register or memory device.
Next, in box 79, a current block variable is equated to the first block. In software the current block may be a software variable which represents a data block. In a hardware implementation the current block may be one or more registers or memory devices. After the current block is equated to the first block, the current frame is assigned to the current block. The procedure then proceeds to decision box 81.
Next, as illustrated by decision box 81, a check is made to determine whether or not there are more frames from the sequence of frames to be processed. If so, the procedure continues to box 83. If not, the procedure jumps to box 91.
In box 83, the next frame from the sequence of frames is received and equated to the current frame variable.
In box 85, the current block variable is equated to the next block, and then the current frame variable is assigned to the current block variable. Upon exiting box 85, the procedure proceeds to decision box 87.
As illustrated in decision box 87, if the current block variable is equal to the last block, then the procedure continues to box 89, otherwise the procedure returns to box 81.
In box 89, the next block is set equal to the first block, and upon exiting box 89 the procedure returns to decision box 81.
Box 91 is entered from decision box 81. In box 91, a check is made to determine if the current block variable is equal to the last block. If so, the procedure terminates. If not, the current frame is assigned to each of the remaining data blocks which follow the current block, up to and including the last block, as previously explained above in the description of FIG. 2.
Training Algorithm
The speech-recognition system of the present invention has principally two modes of operation: (1) a training mode in which examples of spoken words are used to train the neural networks, and (2) a recognition mode in which unknown spoken words are identified. Referring to FIG. 2, generally, the user must train neural networks 12, 13, and 14 by speaking into microphone 1 all of the words that the system is to recognize. In some cases the training may be limited to several users speaking each word once. However, those skilled in the art will realize that the training may require any number of different speakers uttering each word more than once.
For a neural network to be useful, the weights of each neuron circuit must be determined. This can be accomplished by the use of an appropriate training algorithm.
In implementing a neural network of the present invention, one generally selects the number of neurons or neuron circuits to be equal to or less than the number of training examples presented to the network.
A training example is defined as one set of given inputs and resulting output(s). In a preferred embodiment of the present invention, each word spoken into microphone 1 of FIG. 2 generates at least one training example.
For a preferred embodiment of the present invention, the training algorithm used for the neural networks is shown in FIG. 6.
FIG. 6 shows a flow diagram of a method of training a neural network to recognize speech in accordance with a preferred embodiment of the present invention. First, regarding box 93, an example of a known word is spoken into a microphone of the speech-recognition system.
In box 95, A/D conversion is performed on the speech signal. Cepstral analysis is performed on the digitized signal which is output from the A/D conversion. The cepstral analysis produces a sequence of data frames which contain the relevant features of the spoken word. Each data frame comprises twelve 32-bit words which represent the results of the cepstral analysis of a time slice of the spoken word. In a preferred embodiment, the duration of the time slice is 15 milliseconds.
Those skilled in the art will understand that the present invention places no limit on the bit length of the words in the data frames; the bit length may be 32 bits, 16 bits, or any other value. In addition, the number of words per data frame and the duration of the time slice may vary, depending on the intended application of the present invention.
Next, in box 97, a divide-and-conquer algorithm (the steps of which are shown in FIG. 5) is used to generate a plurality of blocks from the sequence of data frames.
In box 99, one of the blocks generated by the divide-and-conquer algorithm is selected. The input portion of a training example is set equal to the selected block.
In box 101, if the neural network is being trained to recognize the selected block, then the output portion of the training example is set to one; otherwise it is set to zero. Upon exiting box 101 the procedure continues with box 103.
Next, in box 103, the training example is saved in memory of computer 5 (FIGS. 1 and 2). This allows a plurality of training examples to be generated and stored.
In decision box 105, a check is made to determine if there is another data block, generated from the current sequence of data frames, to be used in training the neural network. If so, the procedure returns to box 99. If not, the procedure proceeds to decision box 107.
In decision box 107, a determination is made to see if there is another spoken word to be used in the training session. If so, the procedure returns to box 93. If not, the procedure continues to box 109.
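Boxes 99 through 107 amount to pairing each data block of a word with a binary target for each network; a minimal sketch under that reading follows (the helper name is hypothetical).

```python
def training_examples_for_network(blocks, network_recognizes_word):
    """Build training examples for one network from one word's data blocks:
    the input portion is the block, and the output portion is one if this
    network is being trained to recognize the word, zero otherwise."""
    target = 1.0 if network_recognizes_word else 0.0
    return [(block, target) for block in blocks]
```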
In box 109, a comparison is made between the number of training examples provided and the number of neurons in the neural network. If the number of neurons is equal to the number of training examples, a matrix-inversion technique may be employed to solve for the value of each weight. If the number of neurons is not equal to the number of training examples, a least-squares estimation technique is employed to solve for the value of each weight. Suitable least-squares estimation techniques include, for example, least-squares, extended least-squares, pseudo-inverse, Kalman filter, maximum-likelihood algorithm, Bayesian estimation, and the like.
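Box 109 can be illustrated with a one-pass solve in numpy; this sketch assumes each example's input portion has been flattened to a vector of data points and reuses the gating-exponent convention from the Equation 4 sketch (the helper itself is hypothetical).

```python
import numpy as np

def solve_weights(examples, gating):
    """Build the design matrix of polynomial-expansion terms, then solve for
    the weights directly: matrix inversion when the matrix is square,
    least-squares estimation otherwise (no repetitive training epochs)."""
    X = np.array([[np.prod(np.power(inputs, exponents)) for exponents in gating]
                  for inputs, _ in examples])
    y = np.array([output for _, output in examples])
    if X.shape[0] == X.shape[1]:
        return np.linalg.solve(X, y)                 # matrix-inversion technique
    return np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares estimation
```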
Summary
There has been described herein a concept, as well as several embodiments including a preferred embodiment, of a method of training neural networks utilized in speech recognition.
Because the various embodiments of the speech-recognition system as herein described utilize a divide-and-conquer algorithm to partition speech samples, and a suitable method for training neural networks which operate upon the outputs generated by the divide-and-conquer algorithm, such embodiments are insensitive to differences in speakers and are not adversely affected by background noise.
It will also be appreciated that the various embodiments of the speech-recognition system as described herein include a neural network which does not require repetitive training and which yields a global minimum to each given set of input vectors; thus, the embodiments of the present invention require substantially less training time and are significantly more accurate than known speech-recognition systems.
Furthermore, it will be apparent to those skilled in the art that the disclosed invention may be modified in numerous ways and may assume many embodiments other than the preferred form specifically set out and described above.
It will be understood that the concept of the present invention can vary in many ways. Accordingly, it is intended by the appended claims to cover all modifications of the invention which fall within the true spirit and scope of the invention.

Claims

CLAIMS
1. A method of training a plurality of neural networks used in a speech-recognition system, each of said neural networks comprising a plurality of neurons, said method producing a plurality of training examples wherein each of said training examples comprises an input portion and an output portion, said method comprising the following steps:
(a) receiving an example spoken word;
(b) performing analog-to-digital conversion of said spoken word, said conversion producing a digitized word;
(c) performing cepstral analysis of said digitized word, said analysis producing a sequence of data frames;
(d) generating a plurality of data blocks from said sequence of data frames;
(e) selecting one of said plurality of data blocks and equating said input portion of one of said plurality of training examples to said selected data block;
(f) selecting one of said plurality of neural networks and determining if said selected neural network is to recognize said selected data block;
(i) if so, setting said output portion of said one training example to one;
(ii) if not, setting the output portion of said one training example to zero;
(g) saving said one training example;
(h) determining if there is another one of said plurality of data blocks;
(i) if so, returning to step (e);
(ii) if not, ending said method.
2. The method recited in claim 1, wherein in step (d) said plurality of data blocks is generated by sub-steps (d1) through (d11) as follows:
(d1) representing said speech example as a sequence of frames;
(d2) equating a current frame to a first one of said sequence of frames;
(d3) assigning said current frame to a first data block of said plurality of data blocks;
(d4) equating a current data block to said first data block;
(d5) determining whether there is a next frame in said sequence of frames;
(i) if so, proceeding to step (d6);
(ii) if not, proceeding to step (d11);
(d6) equating said current frame to said next frame;
(d7) equating said current data block to the next data block of said plurality of data blocks;
(d8) assigning said current frame to said current data block;
(d9) determining if said current data block is the last one of said plurality of data blocks;
(i) if so, proceeding to step (d10);
(ii) if not, returning to step (d5);
(d10) equating said next data block to said first data block, and returning to step (d5); and
(d11) determining if said current data block is said last data block;
(i) if not, assigning said current frame to the remaining ones of said plurality of data blocks.
3. The method recited in claim 1, wherein in step (d) said plurality of data blocks is generated by sub-steps (d1) through (d14) as follows:
(d1) determining the number of frames in said sequence of frames which represent said speech sample;
(d2) defining the number of frames per data block;
(d3) calculating the number of data blocks to be generated;
(d4) representing said speech example as a sequence of frames;
(d5) equating a current frame to a first one of said sequence of frames;
(d6) assigning said current frame to a first data block of said plurality of data blocks;
(d7) equating a current data block to said first data block;
(d8) determining whether there is a next frame in said sequence of frames;
(i) if so, proceeding to step (d9);
(ii) if not, proceeding to step (d14);
(d9) equating said current frame to said next frame;
(d10) equating said current data block to the next data block of said plurality of data blocks;
(d11) assigning said current frame to said current data block;
(d12) determining if said current data block is the last one of said plurality of data blocks;
(i) if so, proceeding to step (d13);
(ii) if not, returning to step (d8);
(d13) equating said next data block to said first data block, and returning to step (d8); and
(d14) determining if said current data block is said last data block;
(i) if not, assigning said current frame to the remaining ones of said plurality of data blocks.
4. The method recited in claim 3, wherein in step (d3) said number of data blocks is calculated by substeps (d31) and (d32) as follows:
(d31) dividing said number of frames by said frames per data block producing a result; and
(d32) rounding up said result to the nearest integer.
5. The method recited in claim 1, wherein each frame in said sequence of frames comprises a plurality of data points, wherein each of said data points represents the value of a function which is based on cepstrum coefficients.
6. The method recited in claim 1, wherein the operation of each of said neural networks is based upon a polynomial expansion.
7. The method recited in claim 6, wherein said polynomial expansion has the form:
$$y = \sum_{i=1}^{N} w_{i-1}\, x_1^{g_{1i}}\, x_2^{g_{2i}} \cdots x_n^{g_{ni}}$$
wherein y represents the output of the neural network; wherein wi-1 represents the weight value for the ith neuron;
wherein x1, x2, . . . , xn represent inputs to said neural network;
wherein g1i, . . ., gni represent gating functions for the ith neuron which are applied to said inputs; and
wherein n is a positive integer.
8. The method recited in claim 7, wherein each xi is represented by the function xi = fi(zj), wherein zj is any arbitrary variable, and wherein the indices i and j are any positive integers.
9. The method recited in claim 7, wherein the operation of each of said neural networks is based upon a truncated version of said polynomial expansion.
10. The method recited in claim 1, wherein every training example is used only once by said method.
PCT/US1995/005002 1994-06-03 1995-04-25 Method of training neural networks used for speech recognition WO1995034035A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU24270/95A AU2427095A (en) 1994-06-03 1995-04-25 Method of training neural networks used for speech recognition
CA002190631A CA2190631C (en) 1994-06-03 1995-04-25 Method of training neural networks used for speech recognition
DE19581663T DE19581663T1 (en) 1994-06-03 1995-04-25 Procedure for training neural networks that are used for speech recognition
GB9625250A GB2303237B (en) 1994-06-03 1995-04-25 Method of training neural networks used for speech recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/253,893 1994-06-03
US08/253,893 US5509103A (en) 1994-06-03 1994-06-03 Method of training neural networks used for speech recognition

Publications (1)

Publication Number Publication Date
WO1995034035A1 true WO1995034035A1 (en) 1995-12-14

Family

ID=22962136

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/005002 WO1995034035A1 (en) 1994-06-03 1995-04-25 Method of training neural networks used for speech recognition

Country Status (7)

Country Link
US (1) US5509103A (en)
CN (1) CN1151218A (en)
AU (1) AU2427095A (en)
CA (1) CA2190631C (en)
DE (1) DE19581663T1 (en)
GB (1) GB2303237B (en)
WO (1) WO1995034035A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998037543A1 (en) * 1997-02-25 1998-08-27 Motorola Inc. Method and apparatus for training a speaker recognition system
EP1688872A2 (en) * 2005-02-04 2006-08-09 Bernard Angeniol Informatics tool for prediction
EP2221805A1 (en) * 2009-02-20 2010-08-25 Harman Becker Automotive Systems GmbH Method for automated training of a plurality of artificial neural networks

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5697369A (en) * 1988-12-22 1997-12-16 Biofield Corp. Method and apparatus for disease, injury and bodily condition screening or sensing
US5749072A (en) * 1994-06-03 1998-05-05 Motorola Inc. Communications device responsive to spoken commands and methods of using same
US5621848A (en) * 1994-06-06 1997-04-15 Motorola, Inc. Method of partitioning a sequence of data frames
US5724486A (en) * 1995-08-21 1998-03-03 Motorola Inc. Method for structuring an expert system utilizing one or more polynomial processors
US5745874A (en) * 1996-03-04 1998-04-28 National Semiconductor Corporation Preprocessor for automatic speech recognition system
US5905789A (en) * 1996-10-07 1999-05-18 Northern Telecom Limited Call-forwarding system using adaptive model of user behavior
US6167117A (en) * 1996-10-07 2000-12-26 Nortel Networks Limited Voice-dialing system using model of calling behavior
US5917891A (en) * 1996-10-07 1999-06-29 Northern Telecom, Limited Voice-dialing system using adaptive model of calling behavior
US5912949A (en) * 1996-11-05 1999-06-15 Northern Telecom Limited Voice-dialing system using both spoken names and initials in recognition
US5995924A (en) * 1997-05-05 1999-11-30 U.S. West, Inc. Computer-based method and apparatus for classifying statement types based on intonation analysis
US6192353B1 (en) * 1998-02-09 2001-02-20 Motorola, Inc. Multiresolutional classifier with training system and method
US6131089A (en) * 1998-05-04 2000-10-10 Motorola, Inc. Pattern classifier with training system and methods of operation therefor
US7006969B2 (en) * 2000-11-02 2006-02-28 At&T Corp. System and method of pattern recognition in very high-dimensional space
US7369993B1 (en) 2000-11-02 2008-05-06 At&T Corp. System and method of pattern recognition in very high-dimensional space
WO2002091358A1 (en) * 2001-05-08 2002-11-14 Intel Corporation Method and apparatus for rejection of speech recognition results in accordance with confidence level
US7346497B2 (en) * 2001-05-08 2008-03-18 Intel Corporation High-order entropy error functions for neural classifiers
KR100486735B1 (en) * 2003-02-28 2005-05-03 삼성전자주식회사 Method of establishing optimum-partitioned classifed neural network and apparatus and method and apparatus for automatic labeling using optimum-partitioned classifed neural network
CN100446029C (en) * 2007-02-15 2008-12-24 杨志军 Signal processing circuit for intelligent robot visual identifying system
US9240184B1 (en) * 2012-11-15 2016-01-19 Google Inc. Frame-level combination of deep neural network and gaussian mixture models
US9508347B2 (en) * 2013-07-10 2016-11-29 Tencent Technology (Shenzhen) Company Limited Method and device for parallel processing in model training
CN104021373B (en) * 2014-05-27 2017-02-15 江苏大学 Semi-supervised speech feature variable factor decomposition method
US9786270B2 (en) 2015-07-09 2017-10-10 Google Inc. Generating acoustic models
US10229672B1 (en) 2015-12-31 2019-03-12 Google Llc Training acoustic models using connectionist temporal classification
US10878318B2 (en) * 2016-03-28 2020-12-29 Google Llc Adaptive artificial neural network selection techniques
WO2017197330A1 (en) * 2016-05-13 2017-11-16 Maluuba Inc. Two-stage training of a spoken dialogue system
US20180018973A1 (en) 2016-07-15 2018-01-18 Google Inc. Speaker verification
US10706840B2 (en) 2017-08-18 2020-07-07 Google Llc Encoder-decoder models for sequence to sequence mapping
CN108053025B (en) * 2017-12-08 2020-01-24 合肥工业大学 Multi-column neural network medical image analysis method and device
US11380315B2 (en) * 2019-03-09 2022-07-05 Cisco Technology, Inc. Characterizing accuracy of ensemble models for automatic speech recognition by determining a predetermined number of multiple ASR engines based on their historical performance
CN110767231A (en) * 2019-09-19 2020-02-07 平安科技(深圳)有限公司 Voice control equipment awakening word identification method and device based on time delay neural network
CN111723873A (en) * 2020-06-29 2020-09-29 南方电网科学研究院有限责任公司 Power sequence data classification method and device
CN114038465B (en) * 2021-04-28 2022-08-23 北京有竹居网络技术有限公司 Voice processing method and device and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5365592A (en) * 1990-07-19 1994-11-15 Hughes Aircraft Company Digital voice detection apparatus and method using transform domain processing
US5408588A (en) * 1991-06-06 1995-04-18 Ulug; Mehmet E. Artificial neural network method and architecture

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0435282B1 (en) * 1989-12-28 1997-04-23 Sharp Kabushiki Kaisha Voice recognition apparatus
US5212765A (en) * 1990-08-03 1993-05-18 E. I. Du Pont De Nemours & Co., Inc. On-line training neural network system for process control
FR2689292A1 (en) * 1992-03-27 1993-10-01 Lorraine Laminage Voice recognition method using neuronal network - involves recognising pronounce words by comparison with words in reference vocabulary using sub-vocabulary for acoustic word reference
DE69328275T2 (en) * 1992-06-18 2000-09-28 Seiko Epson Corp Speech recognition system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5365592A (en) * 1990-07-19 1994-11-15 Hughes Aircraft Company Digital voice detection apparatus and method using transform domain processing
US5408588A (en) * 1991-06-06 1995-04-18 Ulug; Mehmet E. Artificial neural network method and architecture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
INFORMATION THEORY AND RELIABLE COMMUNICATION, Copyright 1968, ROBERT G. GALLAGER, pages 286-291. *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998037543A1 (en) * 1997-02-25 1998-08-27 Motorola Inc. Method and apparatus for training a speaker recognition system
EP1688872A2 (en) * 2005-02-04 2006-08-09 Bernard Angeniol Informatics tool for prediction
FR2881857A1 (en) * 2005-02-04 2006-08-11 Bernard Angeniol IT TOOL FOR FORECAST
EP1688872A3 (en) * 2005-02-04 2009-12-30 Bernard Angeniol Informatics tool for prediction
EP2221805A1 (en) * 2009-02-20 2010-08-25 Harman Becker Automotive Systems GmbH Method for automated training of a plurality of artificial neural networks
US8554555B2 (en) 2009-02-20 2013-10-08 Nuance Communications, Inc. Method for automated training of a plurality of artificial neural networks

Also Published As

Publication number Publication date
DE19581663T1 (en) 1997-05-07
AU2427095A (en) 1996-01-04
GB2303237A (en) 1997-02-12
CN1151218A (en) 1997-06-04
CA2190631C (en) 2000-02-22
US5509103A (en) 1996-04-16
CA2190631A1 (en) 1995-12-14
GB9625250D0 (en) 1997-01-22
GB2303237B (en) 1997-12-17

Similar Documents

Publication Publication Date Title
CA2190631C (en) Method of training neural networks used for speech recognition
US5903863A (en) Method of partitioning a sequence of data frames
US5638486A (en) Method and system for continuous speech recognition using voting techniques
US5596679A (en) Method and system for identifying spoken sounds in continuous speech by comparing classifier outputs
US5594834A (en) Method and system for recognizing a boundary between sounds in continuous speech
US5734793A (en) System for recognizing spoken sounds from continuous speech and method of using same
US6021387A (en) Speech recognition apparatus for consumer electronic applications
US6219642B1 (en) Quantization using frequency and mean compensated frequency input data for robust speech recognition
US6347297B1 (en) Matrix quantization with vector quantization error compensation and neural network postprocessing for robust speech recognition
EP0617827B1 (en) Composite expert
JPS62231996A (en) Allowance evaluation of word corresponding to voice input
US5832181A (en) Speech-recognition system utilizing neural networks and method of using same
CN109192200A (en) A kind of audio recognition method
Devi et al. A novel approach for speech feature extraction by cubic-log compression in MFCC
Chauhan et al. Speech recognition and separation system using deep learning
US6275799B1 (en) Reference pattern learning system
JP3029803B2 (en) Word model generation device for speech recognition and speech recognition device
Nijhawan et al. Real time speaker recognition system for hindi words
Kulkarni et al. Comparison between SVM and other classifiers for SER
Mirhassani et al. Fuzzy decision fusion of complementary experts based on evolutionary cepstral coefficients for phoneme recognition
Mahkonen et al. Cascade processing for speeding up sliding window sparse classification
CN116863937A (en) Far-field speaker confirmation method based on self-distillation pre-training and meta-learning fine tuning
JP3871774B2 (en) Voice recognition apparatus, voice recognition method, and recording medium recording voice recognition program
Saraswathi et al. Implementation of Tamil speech recognition system using neural networks
Wang et al. Design and Applications of Embedded Systems for Speech Processing

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 95193415.5

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TT UA UG UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2190631

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 9625250.7

Country of ref document: GB

RET De translation (de og part 6b)

Ref document number: 19581663

Country of ref document: DE

Date of ref document: 19970507

WWE Wipo information: entry into national phase

Ref document number: 19581663

Country of ref document: DE

122 Ep: pct application non-entry in european phase