WO1993017415A1 - Method for determining boundaries of isolated words - Google Patents

Method for determining boundaries of isolated words

Info

Publication number
WO1993017415A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
time
boundary
determining
value
Prior art date
Application number
PCT/US1993/001611
Other languages
French (fr)
Inventor
Jean-Claude Junqua
Original Assignee
Junqua Jean Claude
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Junqua Jean Claude filed Critical Junqua Jean Claude
Priority to JP5515034A priority Critical patent/JPH06507507A/en
Publication of WO1993017415A1 publication Critical patent/WO1993017415A1/en

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78Detection of presence or absence of voice signals
    • G10L25/87Detection of discrete points within a voice signal

Abstract

A method for analyzing a speech signal to isolate speech and nonspeech portions of the speech signal is provided. The method is applied to an input speech signal to determine boundary values locating isolated words or groups of words within the speech signal. First, a comparison signal is generated which is biased to emphasize components of the signal having preselected frequencies (20). Next, the system compares the comparison signal with a threshold level to determine estimated boundary values demonstrating the beginning and ending points of the words (22). Once the estimated boundary values are calculated, the system adjusts the boundary values to achieve final boundary values (26, 28). The specific amount of adjustment varies, depending upon the amount of noise present in the signal. The final pair of boundary values provide a reliable indication of the location and duration of the isolated word or group of words within the speech signal.

Description

METHOD FOR DETERMINING BOUNDARIES OF ISOLATED WORDS
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to speech recognition systems and, in particular, to a system for determining the location of isolated words within a speech signal.
2. Description of Related Art
A wide variety of speech recognition systems have been developed. Typically, such systems receive a time-varying speech signal representative of spoken words and phrases. The speech recognition system attempts to determine the words and phrases within the speech signal by analyzing components of the speech signal. As a first step, most speech recognition systems must isolate portions of the speech signal which convey spoken words from portions carrying silence. To this end, the systems attempt to determine the beginning and ending boundaries of a word or group of words within the speech signal. Accurate and reliable determination of the beginning and ending boundaries of words or sentences poses a challenging problem, particularly when the speech signal includes background noise.
A variety of techniques have been developed for analyzing a time-varying speech signal to determine the location of an isolated word or group of words within the signal. Typically, the intensity of the speech signal is measured. Portions of the speech signal having an intensity greater than a minimum threshold are designated as being "speech," whereas those portions of the speech signal having an intensity below the threshold are designated as being silent portions or "nonspeech." Unfortunately, such simple discrimination techniques have been unreliable, particularly where substantial noise is present in the signal. Indeed, it has been estimated that more than half of the errors occurring in a typical speech recognition system are the result of an inaccurate determination of the location of the words within the speech signal. To minimize such errors, the technique for locating isolated words within the speech signal must be capable of reliably and accurately locating the boundaries of the words, despite a high noise level. Further, the technique must be sufficiently simple and quick to allow for real time processing of the speech signal.
Furthermore, the technique must be capable of adapting to a variety of noise environments without any a priori knowledge of the noise. The ability to accurately and reliably locate the boundaries of isolated words in any of a variety of noise environments is generally referred to as the robustness of the technique. Heretofore, a robust technique for accurately locating words within a speech signal has not been developed.
OBJECTS AND SUMMARY OF THE INVENTION
In view of the foregoing, it can be appreciated that there is a need to develop an improved technique for locating isolated words or groups of words within a speech signal in any of a variety of noise environments. Accordingly, it is an object of the invention to provide such an improved technique for locating isolated words or groups of words within a speech signal; and
It is a further object of the invention to provide such a technique in a sufficiently simple form to allow for real time processing of a speech signal.
These and other objects of the invention are achieved by a speech-detecting method wherein a comparison function representative, in part, of portions of a speech signal having frequencies within a preselected bandwidth is compared with a threshold value for determining the beginning and ending approximate boundaries of an isolated word or group of words within the speech signal.
In accordance with the preferred embodiment, the method comprises the steps of determining a constant threshold value representative of the level of the signal within regions of relative silence, determining a time-varying comparison signal representative, in part, of components of the speech signal having frequencies within a preselected frequency range, and comparing the comparison signal with the threshold value to determine crossover times when the comparison signal rises above the threshold or decreases below the threshold. A crossover time where the comparison signal rises from below the threshold to above the threshold is an indication of an approximate beginning boundary for a word. A crossover time wherein the comparison signal decreases from above the threshold to below the threshold is an indication of the ending boundary of a word. By determining the first beginning and last ending boundaries of an isolated word or group of words within the signal, the location of the isolated word or group of words within the signal is thereby determined.
The threshold value is calculated by determining the maximum value, Emax, of the root-mean-squared (RMS) energy contained within the speech signal, and determining an average value, Eave, for the RMS energy of the speech signal within the regions of relative silence. The threshold is given by the equation:
Ethreshold = ((Emax - Eave) / 3 + Eave) * A, where A is a preselected constant.
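The threshold computation can be sketched as follows. This is a minimal Python sketch, assuming E is a list of per-frame log-RMS energy values whose first frames are known silence; the equation form ((Emax - Eave)/3 + Eave) * A is a reconstruction, since the published equation is garbled in this copy, and A = 2.9 is the value reported later in the description.

```python
def compute_threshold(E, silent_frames=10, A=2.9):
    """Detection threshold from the log-RMS energy contour E.

    E: per-frame log-RMS energy values; the first `silent_frames`
    frames are assumed to contain no speech (only background noise).
    The equation form is a reconstruction of the garbled original.
    """
    e_max = max(E)                                   # Emax over the whole signal
    e_ave = sum(E[:silent_frames]) / silent_frames   # Eave over known silence
    return ((e_max - e_ave) / 3.0 + e_ave) * A
```

With ten silent frames at energy 1.0 and a peak of 7.0, the threshold evaluates to ((7 - 1)/3 + 1) * 2.9 = 8.7.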
The comparison signal is generated by, first, dividing the speech signal into a set of individual time-varying signals, with each time-varying signal including only a portion of the overall speech signal. Next, the individual time-varying signals are separately processed to calculate a comparison value emphasizing frequencies of the individual signals within the preselected frequency range. To this end, each individual time-varying signal is converted to a frequency-varying signal by a Fourier transform. Once converted to a frequency-varying signal, the components of the individual signal having frequencies within the preselected frequency range are easily summed or integrated to yield a single intermediate comparison value. Since each individual signal of each time frame is processed separately, a plurality of intermediate comparison values are calculated, with the various intermediate comparison values together comprising the intermediate comparison signal. Preferably, the preselected frequency range includes frequencies between 250 and 3,500 Hz. Also, for each time frame, the logarithm of the RMS energy of the individual signal within the time frame is computed and added to the intermediate comparison value to yield a final comparison function.
Once calculated, the comparison function is compared with the threshold value to determine whether it exceeds the threshold value. In this manner, crossover times, wherein the comparison signal crosses to above or below the threshold value, are determined. The first and last crossover times provide a first approximation for the beginning and ending boundaries of the isolated word or group of words within the speech signal.
The first approximation of the boundary end points is further processed to provide a more accurate, refined determination of the end points. To this end, the noise level of the speech signal is evaluated. If the evaluation reveals that the speech signal is noisy, typically with an SNR less than or equal to 15 dB, then an adjustment value is calculated for use in adjusting the end points. The adjustment value is calculated from the equation: adjustment = B * Eave + C, wherein B and C are preselected constants.
The values of B and C are determined by the amount of noise present in the speech signal. The adjustment value is subtracted from the beginning boundary values to provide a final approximation of the beginning boundary values. Likewise, the adjustment value is added to the ending boundary values to yield a final approximation of the ending boundary value.
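The single-pass adjustment might be sketched as below, assuming frame-indexed boundary values; the default B and C values are those the description later reports for a high-noise, low-zero-crossing-rate case, and the parameter names are illustrative.

```python
def adjust_boundaries_noisy(begin, end, e_ave,
                            b_begin=3.0, c_begin=8.0,
                            b_end=7.0, c_end=8.0):
    """Single-pass boundary adjustment for noisy signals.

    Implements adjustment = B * Eave + C (equation 2 of the text):
    the adjustment is subtracted from the beginning boundary and
    added to the ending boundary.  Units are time frames.
    """
    begin_adj = begin - (b_begin * e_ave + c_begin)
    end_adj = end + (b_end * e_ave + c_end)
    return begin_adj, end_adj
```

For example, with Eave = 2.5 the beginning boundary moves earlier by 3 * 2.5 + 8 = 15.5 frames and the ending boundary later by 7 * 2.5 + 8 = 25.5 frames.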
If the evaluation of the noise level indicates that the signal is not noisy, then an iterative adjustment technique is performed. First, a preselected value, such as 20 msec, is subtracted from the approximate beginning boundary value, and a second preselected value, such as 50 msec, is added to the approximate ending boundary value. Next, a second threshold value, Ethreshold2, is calculated from the equation:
Ethreshold2 = (Emax - Eave) / D + Eave, where D is a preselected constant. The logarithm of the RMS energy of the speech signal at the second approximated end points is compared with the second threshold value. If the logarithm of the RMS energy is greater than the second threshold, the steps of adding and subtracting the preselected adjustment values to the end points are again performed, thus yielding an updated approximation for the end points. Then, the logarithm of the RMS energy in the neighboring region of the new end points is checked against the second threshold value. This iterative process continues until the end points have been adjusted a sufficient amount to be reliably below the second threshold value. This iterative technique operates to reliably locate the boundaries of the words when the noise level is low.
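The iterative refinement can be sketched as follows. Step sizes of 2 and 5 frames correspond to the 20 msec and 50 msec offsets under an assumed 10-msec frame, and the divisor D is left as a parameter because the text does not give its value.

```python
def refine_boundaries_low_noise(E, begin, end, e_max, e_ave,
                                D=4.0, step_begin=2, step_end=5,
                                max_iter=50):
    """Iterative end-point refinement for low-noise signals.

    E is the per-frame log-RMS energy contour; begin/end are frame
    indices of the first approximation.  D and the frame-based step
    sizes are assumptions (the text gives msec offsets, not D).
    """
    threshold2 = (e_max - e_ave) / D + e_ave
    # Initial widening: subtract 20 msec at the start, add 50 msec at the end.
    begin = max(begin - step_begin, 0)
    end = min(end + step_end, len(E) - 1)
    for _ in range(max_iter):
        moved = False
        # Keep widening while the energy at an end point still exceeds Ethreshold2.
        if begin > 0 and E[begin] > threshold2:
            begin = max(begin - step_begin, 0)
            moved = True
        if end < len(E) - 1 and E[end] > threshold2:
            end = min(end + step_end, len(E) - 1)
            moved = True
        if not moved:
            break
    return begin, end
```

On a contour that is silent outside frames 10-19, the initial boundaries (10, 19) widen once to (8, 24) and then stop, since both end points already lie below the second threshold.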
The just-described iterative technique involving the calculation of the logarithm of the RMS energy may be supplemented with a similar calculation of the zero crossing rate of the speech signal such that the adjustment of the boundary values depends both on the RMS energy in the vicinity of the end points and the zero crossing rate in the vicinity of the end points.
In this manner, regardless of whether a high or low noise level exists within the speech signal, the boundary values of an isolated word or group of words within the speech signal are reliably located. Once the boundary values have been reliably determined, the location of the isolated word or group of words is therefore reliably determined. Processing of the words may then proceed to determine the content of the words or the sentence.
By generating a comparison signal emphasizing midrange frequencies, the location of the words is more reliably determined, despite a high noise level. By adjusting the boundary end points of the words in the manner described above, a more accurate and refined determination of the word boundaries is achieved. The frequency band of 250-3,500 Hz is preferably employed because desired components of speech occur within this frequency band. More specifically, the vowel portion of speech of a spoken word primarily occurs within this frequency range. To properly account for varying noise levels, the threshold against which the comparison signal is compared is adjusted according to the level of noise as measured in relatively silent portions of the speech signal. To further adapt to a variety of noise levels, the procedure whereby the beginning and ending boundaries of the words are adjusted likewise adapts to the ambient noise level.
BRIEF DESCRIPTION OF THE DRAWINGS The features of the present invention, which are believed to be novel, are set forth with particularity in the appended claims. The present invention, both as to its organization and manner of operation, together with further objects and advantages, may best be understood by reference to the following description, taken in connection with the accompanying drawings. Figure 1 is a block diagram of a speech recognition method incorporating a preferred embodiment of the present invention;
Figure 2 is a flow chart summarizing a method by which the boundaries of an isolated word or group of words within a speech signal are determined;
Figure 3 is a flow chart showing a method by which a comparison signal is generated for use in determining the boundary values of the isolated word or group of words within the speech signal; Figure 4 is a flow chart showing a method by which the comparison signal is compared with a threshold value to determine an initial estimate or approximation of the beginning and ending boundaries of words within the speech signal; Figure 5 is a graphic representation of a spectrogram of a speech signal corresponding to the spoken word "one" and showing the comparison signal, as well as initial and final estimates of the beginning and ending boundaries of the word "one"; Figure 6 is a graphic representation of a spectrogram of a speech signal incorporating the spoken word "one," showing the comparison signal, and showing initial and final estimates of the beginning and ending boundaries of the word "one," with the speech signal having a white-Gaussian noise with an SNR of approximately 15 dB; and
Figure 7 is a flow chart showing an iterative method whereby the initial estimates of the beginning and ending boundaries of words within the speech signal are adjusted when the noise level of the signal is low.
DESCRIPTION OF THE PREFERRED EMBODIMENTS The following description is provided to enable any person skilled in the art to make and use the invention and sets forth the best modes contemplated by the inventor of carrying out his invention. Various modifications, however, will remain readily apparent to those skilled in the art, since the generic principles of the present invention have been defined herein specifically to provide a method for reliably determining the beginning and ending boundaries of words within a speech signal in the presence of a wide variety of ambient noise levels.
Figure 1 provides an overview of a speech recognition system or method incorporating the present invention. The speech recognition system 10 includes a speech detection portion 12 which operates on an input time-varying speech signal S(t) to determine the location and duration of an isolated word or group of words carried within the speech signal. The speech detection portion operates to isolate a portion of the signal comprising "speech" from portions of the signal comprising relative silence or "nonspeech." Thus, if the speech signal includes a single word, the speech detector determines the beginning and ending boundaries of the word. If the speech signal includes a group of words, such as a complete sentence, the speech detector determines the beginning and ending boundaries of the entire sentence. Herein, a reference to the words of a speech signal is a reference to either a single isolated word or a group of words. Once the location of spoken words within the signal S(t) is determined, the system converts the portion of the signal containing the located words to a set of frame-based feature vectors during an analysis phase 14. Such may be achieved by using a conventional perceptually-based linear prediction technique.
During a quantization phase 16, the system operates to associate the feature vectors with prerecorded vectors stored in a feature vector data base 17. During quantization phase 16, a root power sums weighting technique may be applied to the feature vectors to emphasize a portion of the speech spectrum. Quantization phase 16 may be implemented in accordance with conventional feature vector quantization techniques. Finally, during a matching phase 18, the system operates to compare the associated feature vectors to Markov models 19 to decode the words. The Markov models may be initially generated and stored during a training phase wherein speech signals containing known words are processed.
Analysis phase 14, quantization phase 16, and matching phase 18 are postprocessing steps which will not be described further. The details of speech detection phase 12, wherein the location and duration of words within speech signal S(t) is determined, will now be described with reference to the remaining figures. An overview of the speech detection phase 12 is provided in Figure 2. Initially, at 20, the system operates on an input time-varying speech signal S(t) to compute a time-varying comparison signal F(t). As will be described in greater detail, comparison signal F(t) is representative of the logarithm of the RMS energy of the signal biased to emphasize portions of the speech signal having frequencies in a selected frequency range.
Next, at 22, the system calculates a threshold value Ethreshold for comparison with comparison signal F(t) to determine the beginning and ending approximate boundaries of words within speech signal S(t). The boundary values are time values which indicate the approximate beginning of a spoken word or the approximate end of a spoken word within the time-varying input signal S(t). Thus, the words within speech signal S(t) have an associated beginning boundary value and an ending boundary value. Collectively, the boundary values are also herein referred to as "end points," regardless of whether they designate the beginning or ending boundaries of the words.
Once accurately determined, the end points designate the boundaries between silent portions of the speech signal and a spoken portion of the speech signal. Thus, by determining the boundary values, the spoken words of the signal can be isolated from the silent portions of the signal for further processing in accordance with the steps outlined in Figure 1. Further, the duration of the words within the speech signal is easily calculated by subtracting the time value of the beginning boundary value from the time value of the ending boundary value. An accurate measurement of the duration of the words is helpful in decoding the words.
Thus, at step 22, the system determines a pair of boundary end point values. These values represent an initial approximation or estimation of the boundary values of words within the speech signal. Given the initial estimates, the system proceeds to adjust the boundary values in accordance with the level of noise present in the speech signal to determine more accurate boundary values. The noise level of the signal is estimated at step 23. The noise level may be calculated by estimating an average of the logarithm of the RMS energy of the signal in a portion of the signal known to represent silence. At step 24, the system determines whether the noise level of speech signal S(t) is high or low.
If the noise level is high, the system proceeds to step 26 to perform a single adjustment of the boundary values in accordance with a method described in detail below. If the noise level is low, the system proceeds to step 28, where the system iteratively refines the boundary values in accordance with a method described below with reference to Figure 7.
As a result of the execution of either step 26 or step 28, the system possesses a pair of final boundary values representing accurate estimates of the actual boundaries between speech and nonspeech portions of signal S(t).
The method by which the system generates comparison signal F(t) will now be described with reference to Figure 3.
Input speech signal S(t) is a time-varying signal having sound energy or intensity values as a function of time, such as the electrical signal output from a conventional microphone. Preferably, an analog-to-digital converter (not shown) operates on the input speech signal to convert a continuous analog input signal into a discrete signal comprised of thousands or millions of discrete energy or intensity values. Conversion to digital form allows the speech signal to be processed by a digital computer. However, the method of the invention can alternatively be implemented solely in analog form, with appropriate electrical circuits provided for manipulating and processing analog signals. If converted to a digital format, the signal preferably includes at least 100 discrete values per each 10 msec. Signal S(t) comprises a set of time frames, with each time frame covering 10 msec of the signal.
At step 30, signal S(t) is divided into a set of individual signals sn(t), each representing a portion or window of the original signal. The windows, which may be defined by a sliding Hamming window function, are separated by 100 msec and each includes 20 msec or two time frames. However, the duration, shape, and spacing of the windows are configurable parameters of the system which may be adjusted appropriately to achieve desired results.
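The windowing step might look like the following sketch. The window length and hop are given in samples, so the sampling rate is an assumption; only the Hamming weighting and the sliding-window structure come from the text, and both are left as parameters as the text suggests.

```python
import math

def hamming(n):
    """Hamming window of length n."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1))
            for i in range(n)]

def sliding_windows(signal, win_len, hop):
    """Divide the signal into Hamming-weighted windows sn(t).

    win_len and hop are in samples; e.g. at 10 kHz sampling a 20-msec
    window is 200 samples (the sampling rate is an assumption, since
    the text only requires at least 100 samples per 10 msec).
    """
    w = hamming(win_len)
    out = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len]
        out.append([s * wi for s, wi in zip(seg, w)])  # apply the window
    return out
```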
Once divided into a set of individual signals defined by separate windows, the system, at 32, separately transforms each individual time-varying signal sn(t) from the time domain into the frequency domain. Transformation to the frequency domain is achieved by computing the Fourier transform by conventional means such as a fast Fourier transform (FFT) or the like.
Thus, at step 32, the system operates to convert the individual time-varying signals sn(t) into individual frequency-varying signals sn(ν). With each individual time-varying signal covering a time frame of 20 msec padded with zeros to obtain 256 discrete signal values, the resulting frequency domain signal includes 128 discrete values, the FFT producing only one frequency-domain value for every two time domain values. The discrete values of the frequency domain signals will vary from a frequency of approximately 0 upwards to perhaps 5,000 Hz or greater, depending upon the original input signal S(t), any filtering applied to the input signal before sampling, and the sampling rate.
At 34, the system operates to smooth the individual frequency domain signals using a conventional smoothing algorithm. Next, at 36, the system determines the total energy or intensity within each individual frequency-varying signal sn(ν) within a preselected frequency bandwidth. Assuming that a frequency bandwidth of 250-3,500 Hz is selected, the system merely integrates or sums all values of sn(ν) within the range 250-3,500 Hz, and ignores or discards all values of sn(ν) having frequencies outside this range. As can be appreciated, the conversion of the time-varying signals into frequency-varying signals using the fast Fourier transform greatly facilitates the calculation of the total energy or intensity within the preselected frequency range. For each individual frequency-varying signal sn(ν), the system, at step 36, thus calculates a single intermediate comparison value fn. For example, the first individual frequency-varying signal s1(ν), corresponding to the first window of input signal S(t), yields a single comparison value of f1. In general, the system computes a single comparison value fn corresponding to each window n of input signal S(t). The various individual comparison values fn, when taken together, comprise a first comparison function f(t) having discrete values arranged as a function of time.
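The per-window band-energy computation might be sketched as follows. A plain DFT is used for readability in place of the FFT the text prescribes, and summing spectral magnitudes (rather than powers) is an assumption.

```python
import math

def band_energy(window, fs, f_lo=250.0, f_hi=3500.0):
    """Intermediate comparison value fn for one window: the summed
    spectral magnitude within the 250-3,500 Hz band.

    fs is the sampling rate in Hz.  A naive DFT stands in for the
    FFT of the text; a production system would use an FFT.
    """
    n = len(window)
    total = 0.0
    for k in range(n // 2 + 1):              # non-negative frequency bins
        freq = k * fs / n                     # bin centre frequency in Hz
        if f_lo <= freq <= f_hi:
            re = sum(window[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(window[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            total += math.hypot(re, im)       # magnitude of bin k
    return total
```

A 16-sample window holding a pure 1,000 Hz tone at fs = 8,000 Hz lands entirely in bin 2 with magnitude n/2 = 8, so the band energy is about 8.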
At 38, the system normalizes first comparison function f(t). While the system operates to calculate first comparison function f(t), the system simultaneously computes a second comparison signal g(t) by executing steps 40 and 42. As shown in Figure 3, steps 40 and 42 can be executed simultaneously with steps 32-38. This may be achieved by utilizing a parallel processing architecture. Alternatively, steps 40 and 42 can be executed subsequent to steps 32-38. Regardless of the specific implementation, at step 40, the system operates to calculate the logarithm of the RMS energy or intensity of each individual time-varying signal sn(t). Calculation of the logarithm of the RMS energy or intensity is achieved by conventional means, such as by squaring each value within each time-varying signal, summing or integrating all such values within each signal, averaging, taking the square root of the result and, finally, taking the logarithm.
Thus, step 40 operates to calculate a set of values, each value representing the logarithm of the RMS energy for a single window of input signal S(t). In this manner, a set of discrete values gn is calculated, with each value associated with a separate window centered on a separate time value. Taken together, all such values gn form a second comparison function g(t). At step 42, the system operates to normalize comparison function g(t).
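The per-window value gn can be sketched as below; the small floor before the logarithm is an assumption added to avoid log(0) on an all-zero window.

```python
import math

def log_rms(window):
    """gn: logarithm of the RMS energy of one window.

    The 1e-12 floor is an assumption (not in the text) to keep the
    logarithm defined on digitally silent windows.
    """
    rms = math.sqrt(sum(v * v for v in window) / len(window))
    return math.log(max(rms, 1e-12))
```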
At 44, the system sums comparison signals f(t) and g(t) to produce a single comparison function F(t). At 46, the system smooths comparison function F(t) by a conventional smoothing algorithm. At 48, the system normalizes the smoothed comparison function F(t).
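Steps 38 through 48 might be combined as in the sketch below. Min-max normalization and a three-point moving average are stand-ins for the unspecified "conventional" normalization and smoothing algorithms.

```python
def normalize(xs):
    """Scale a sequence to [0, 1]; one plausible normalization, since
    the text does not specify which is used."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0 for _ in xs]
    return [(x - lo) / (hi - lo) for x in xs]

def smooth3(xs):
    """Three-point moving average, a stand-in for the 'conventional
    smoothing algorithm' the text refers to."""
    n = len(xs)
    out = []
    for i in range(n):
        seg = xs[max(0, i - 1):min(n, i + 2)]
        out.append(sum(seg) / len(seg))
    return out

def comparison_function(f_vals, g_vals):
    """F(t): sum of the normalized band-energy contour f(t) and the
    normalized log-RMS contour g(t), then smoothed and renormalized."""
    F = [a + b for a, b in zip(normalize(f_vals), normalize(g_vals))]
    return normalize(smooth3(F))
```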
The just-described steps shown in Figure 3 thus operate to process input signal S(t) to generate a comparison function F(t) representative of the logarithm of the RMS energy of the signal biased by components of the signal having frequencies within the preselected frequency range. With regard to step 30, it is not necessary for all individual signals to be calculated prior to processing of steps 32 and 40. In practice, the individual signals are generated sequentially, with each successive signal processed to yield values of fn and gn prior to sliding the Hamming window to yield a new individual signal.
Exemplary comparison signals F(t) are shown in Figures 5 and 6. In Figures 5 and 6, an input signal S(t) is designated by reference numerals 50 and 50', respectively. The corresponding comparison signal F(t) is represented by reference numerals 52 and 52', respectively. In Figure 5, input signal S(t) represents a spectrogram of the word "one." In Figure 6, input signal S(t) also represents the word "one." However, in Figure 6, input signal S(t) further includes white-Gaussian noise producing an SNR of approximately 15 dB. As can be seen from Figure 5, the comparison signal corresponds roughly to an outline of the input signal conveying the word "one." Thus, during an initial silent portion of signal S(t), the comparison signal is at a minimum. Likewise, during an ending silent portion of signal S(t), the comparison signal is also at a minimum. Also, as can be seen from Figure 5, the comparison signal does not perfectly match the boundaries of the spoken word. Rather, the comparison signal primarily represents that portion of the spoken word contained between the first and last vowels of the spoken word. To obtain a more reliable determination of the boundaries of the word, a refinement or adjustment feature, discussed in detail below, is performed.
In Figure 6, it can be seen that a comparison signal also generally matches the spoken word "one," despite the presence of considerable signal noise. Note, however, that the comparison signal is not as flat in the "silent" portions of the signal as that of Figure 5. This is the result of the added white-Gaussian noise. As will be described below, a separate refinement or adjustment procedure is performed to compensate for signals having a high noise level, such as that of Figure 6. Referring to Figure 4, the method by which the system analyzes the comparison function F(t) to determine initial and ending boundary values for words contained within the input speech signal is described.
At 60, the system computes the logarithm of the RMS energy for the entire input speech signal S(t) to produce a function E(t) varying in time. Computation of E(t) may be facilitated by retrieving the individually-calculated RMS energy functions calculated for each time window at step 40, shown in Figure 3. Regardless of the specific method of computation, the result of step 60 is a time-varying function, E(t), covering the entire time span of the input signal S(t).
At 62, the system determines the maximum value of E(t). This value is designated Emax. At 64, the system determines the average of E(t) over "silent" portions of the input signal. This value is designated Eave. Preferably, Eave is an average over 10 frames of the input signal that are known to be "silent;" i.e., these frames do not include any spoken words, although they may include considerable noise. A simple method for producing "silent" frames for use in calculating Eave is to record at least 10 silent frames prior to recording an input signal.
Once Emax and Eave are calculated, the system proceeds, at step 66, to compute a threshold level Ethreshold from the equation:
Ethreshold = ((Emax - Eave) / 3 + Eave) * A (1)
Parameter A represents a constant which is a configurable parameter of the system, preferably determined by performing experiments on known input signals to determine an optimal value. A value of 2.9 has been found to be effective for use as the parameter A.
At 68, the system compares comparison function F(t) with Ethreshold to determine when the comparison function exceeds the threshold value. The first and last points where the comparison function crosses the threshold value, either by rising from below the threshold to exceed the threshold, or by dropping from above the threshold to below the threshold, represent approximate boundary values for words recorded within the signal. A single pair of approximate boundary values are thereby determined. If only one word is recorded within the signal, such as shown in Figures 5 and 6, then the pair of approximate boundary values indicate the approximate beginning and ending locations of the word. If a group of words are recorded within the input signal, then the pair of approximate boundary values indicate the approximate beginning and ending points of the group of words.
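The crossover search just described can be sketched as follows, with F taken as a frame-indexed comparison function; returning frame indices rather than times is an implementation choice.

```python
def approximate_boundaries(F, threshold):
    """First and last threshold crossings of the comparison function.

    Returns (first, last) frame indices where F exceeds the threshold,
    i.e. the approximate beginning and ending word boundaries, or None
    if F never rises above the threshold.
    """
    above = [i for i, v in enumerate(F) if v > threshold]
    if not above:
        return None
    return above[0], above[-1]
```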
In Figures 5 and 6, exemplary approximate boundary values are indicated with dashed vertical lines and identified by reference numerals 70 and 72, with 70 representing a beginning word boundary and 72 representing an ending word boundary.
In certain applications, such as where extremely low noise signals are processed, these approximate boundary values may be sufficiently accurate to identify the locations of the words for subsequent processing of the individual words. However, in many cases, an adjustment or refinement of the approximate boundary values is necessary to more reliably locate the beginning and ending boundaries of words. Referring again to Figure 2, the system adjusts the approximate boundary values using one of two methods, depending upon the noise level of the signal. The cutoff noise level evaluated by step 24 may be represented by Eave = 2.0. Thus, if Eave is greater than 2.0, the system proceeds to step 26. If Eave is less than or equal to 2.0, then the system proceeds to step 28. An Eave of 2.0 roughly corresponds to an SNR of 15 dB.
If, at step 24, the system determines that the noise level of the signal is high or medium, the system proceeds to step 26 to make a single adjustment to the approximate boundary values. The single adjustment value, or adjustment factor, is subtracted from the approximate beginning word boundary and added to the approximate ending word boundary. The adjustment value is given by the following equation: Adjustment = B * Eave + C (2)
B and C are configurable parameters of the system which are selected to optimize the amount of adjustment. B and C may be derived experimentally by processing known inputs wherein the location and length of words are known prior to processing.
It has been found that the system operates most effectively when parameters B and C take on differing values depending upon the amount of noise present in the speech signal. Also, the values of B and C can be made to depend, in part, on a zero crossing rate which is representative of the rate at which speech signal S(t) passes from positive to negative. The zero crossing rate is a function of time, and may be represented by Z(t). An average zero crossing rate Zave is calculated by averaging Z(t) over the entire signal. Further, B and C preferably take on different values for beginning or ending adjustment values.
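The zero crossing computation can be illustrated as follows. The frame-based helpers are hypothetical; the patent defines only Z(t), the positive-to-negative crossing rate, and its average Zave.

```python
def zero_crossing_rate(frame):
    """Z(t) for one time frame: count transitions of the sampled
    signal from positive to non-positive."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a > 0 >= b)

def average_zcr(frames):
    """Zave: Z(t) averaged over every time frame of the signal."""
    rates = [zero_crossing_rate(f) for f in frames]
    return sum(rates) / len(rates)
```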
Depending upon the specific values for Eave and Zave, the following values for B and C have been found to be effective.
If Eave is greater than 2.4, indicating a high noise level, and Zave is less than 5.0, then B = 3.0 and C = 8.0 for a beginning boundary value adjustment, and B = 7.0 and C = 8.0 for an ending boundary value adjustment. With these parameters, the resulting adjustment value is expressed in the number of time frames, rather than in a time value, such as seconds or milliseconds.
If Eave is greater than 2.4 and Zave is greater than or equal to 5.0, then B = 3.0 and C = 0.0 for the beginning boundary value adjustment, and B = 7.0 and C = 0.0 for the ending boundary value adjustment. If Eave is greater than 2.0 but less than or equal to 2.4, indicating a medium noise level, then the following three conditions apply:
If Zave is less than 5.0, then B = 7.5 and C = 8.0 for the beginning boundary value adjustment, and B = 11.7 and C = 8.0 for the ending boundary value adjustment.
If Zave is greater than or equal to 5.0 but less than 30.0, then B = 4.0 and C = 0.0 for the beginning boundary value adjustment, and B = 6.5 and C = 0.0 for the ending boundary value adjustment.
If Zave is greater than or equal to 30, then B = 7.5 and C = 0.0 for the beginning boundary value adjustment, and B = 11.7 and C = 0.0 for the ending boundary value adjustment. Thus, the system, at step 26, uses the just-described values for B and C to calculate adjustment values. The system then performs a single adjustment to the approximate boundary values by subtracting the beginning boundary value adjustment from the beginning boundary and by adding the ending boundary value adjustment to the ending boundary. In Figure 6, the resulting final boundary values are indicated by solid vertical lines and reference numerals 74' and 76', respectively. As can be seen from Figure 6, the final adjusted boundary values define a fairly broad time window in which the word may reliably be found. The ending word boundary is generally extended a greater amount than the beginning boundary to compensate for the fact that most words tend to begin sharply, yet end with a diminishing sound. The time window between the approximate beginning and ending boundaries may be referred to as an island of reliability. In Figure 5, that portion of signal S(t) occurring before the beginning boundary and after the ending boundary is simply discarded before subsequent processing, as those portions have been reliably determined to be silent portions of the signal. Although not shown in Figure 5, the input signal may include a group of words, perhaps forming a complete sentence. In such case, the final pair of boundary values will reliably locate the group of words. Thus, the adjustment values calculated in
Equation (2) are applied once to adjust the boundary values of signals having a high or medium noise level.
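The selection of B and C under the just-described conditions, and the resulting Equation (2) adjustment expressed in time frames, can be sketched as follows. All names are assumptions; the function presumes it is called only for high or medium noise (Eave greater than 2.0).

```python
def adjustment_frames(e_ave, z_ave, ending):
    """Return the Equation (2) adjustment, B * Eave + C, in time
    frames.  ending=True selects the ending-boundary parameters."""
    if e_ave > 2.4:                      # high noise level
        b = 7.0 if ending else 3.0
        c = 8.0 if z_ave < 5.0 else 0.0
    else:                                # medium noise: 2.0 < Eave <= 2.4
        if z_ave < 5.0:
            b, c = (11.7, 8.0) if ending else (7.5, 8.0)
        elif z_ave < 30.0:
            b, c = (6.5, 0.0) if ending else (4.0, 0.0)
        else:                            # Zave >= 30
            b, c = (11.7, 0.0) if ending else (7.5, 0.0)
    return b * e_ave + c
```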
For signals with a low noise level, a more precise iterative adjustment process, identified by reference numeral 28 in Figure 2, is implemented. The iterative process is shown schematically in Figure 7. As can be seen from Figure 7, the beginning and ending boundaries are processed separately. Iterative adjustment of the beginning boundary value begins with step 80, whereas iterative adjustment of the ending boundary value begins at step 90.
At step 80, a preliminary adjustment value, preferably 20 msec, is subtracted from the beginning boundary value to determine a new approximate beginning boundary value. At step 82, the logarithm of the RMS energy at the new beginning boundary value is examined to determine whether it exceeds a second, more refined, threshold value Ethreshold2. Ethreshold2 is given by the equation:
Ethreshold2 = ((Emax - Eave) / D) + Eave (3)
The parameter D is a configurable constant of the system and may be derived experimentally by processing known input signals. A value of 3.0 has been found effective for use as constant D. If the logarithm of the RMS energy is found to exceed Ethreshold2 within the time frame containing the new boundary value, then the new approximate boundary value is updated. Comparison of the logarithm of the RMS energy at the time frame of the new boundary value is performed at step 84.
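Equation (3) translates directly to code (names assumed):

```python
def e_threshold2(e_max, e_ave, d=3.0):
    """Equation (3): the refined threshold used during iterative
    boundary adjustment.  D = 3.0 is the experimentally effective
    value reported in the text."""
    return (e_max - e_ave) / d + e_ave
```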
If the logarithm of the RMS energy is found to exceed Ethreshold2 at step 84, then the system returns to step 80 to update the boundary value again. If, at step 84, the system determines the logarithm of the RMS energy of the new beginning boundary value is below Ethreshold2, then execution proceeds to step 86, where the system performs a second test against Ethreshold2, involving only time frames immediately prior to the new boundary value.
At 86, the system calculates the logarithm of the RMS energy for the 10 time frames immediately prior to the new beginning boundary value. If, at step 87, the average of the logarithm of the RMS energy within the 10 time frames preceding the new beginning boundary exceeds Ethreshold2, then the system returns to step 80 to adjust the boundary value again. The number of time frames, 10, is also a configurable parameter which may be adjusted to achieve optimal results.
At step 88, the system calculates an average of the zero crossing rate for 10 time frames before the beginning boundary value. If, at step 89, the average of the zero crossing rate for those 10 time frames is greater than a zero crossing rate threshold, then the system again returns to step 80 to further iterate the beginning boundary value. The zero crossing rate threshold is given by 1.3 times the average of the zero crossing rate Zave.
Thus, a total of three tests are performed on the beginning boundary value to determine whether it reliably demarcates a beginning boundary of the word. If any of the three above-described tests fails, the system returns to step 80 to subtract an additional adjustment value from the beginning boundary value to further refine that boundary value. The new adjustment value, or "progression step," is set to 20 msec. Iterative adjustment continues until either a boundary value is determined which passes all three tests or an iteration limit is reached. This iteration limit is set to 100 msec for the beginning boundary. Thus, the beginning boundary value will not be advanced more than 100 msec; hence, the iteration is bounded. Only when a final boundary value is achieved which passes all three tests, or the iteration limit is reached, does the system exit the loop of steps 80-89 of Figure 7 to proceed to the analysis, quantization, and matching phases summarized with reference to Figure 1.

Simultaneously, while the beginning boundary value is iteratively updated, the system operates to iteratively update the ending boundary value. The operations performed on the ending boundary value are similar to those performed on the beginning boundary value and will only be summarized. At step 90, the system sets a new ending boundary value by adding 50 msec to the ending boundary. Next, at step 92, the system determines the logarithm of the RMS energy at the time frame of the new ending boundary value. At step 94, the system compares the logarithm of the RMS energy to Ethreshold2 and returns to step 90 if this value exceeds Ethreshold2. If the logarithm of the RMS energy does not exceed Ethreshold2, the system proceeds to perform two more tests, identified by reference numerals 96-99, involving the logarithm of the RMS energy for 10 time frames and the average zero crossing rate for those 10 time frames.
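One possible rendering of the beginning-boundary loop of steps 80-89 follows. All names and the 10 msec frame length are assumptions; the ending-boundary loop of steps 90-99 differs only in its 50 msec step, its 150 msec limit, its use of frames after rather than before the boundary, and its zero crossing factor.

```python
def refine_beginning(boundary_ms, log_rms, zcr, thr2, z_ave,
                     frame_ms=10, step_ms=20, limit_ms=100):
    """Back the beginning boundary up in 20 msec steps until all
    three tests pass or the 100 msec iteration limit is reached.
    log_rms and zcr hold one value per time frame."""
    moved = 0
    while moved < limit_ms:
        boundary_ms -= step_ms                      # step 80
        moved += step_ms
        i = min(max(boundary_ms // frame_ms, 0), len(log_rms) - 1)
        lo = max(i - 10, 0)                         # 10 frames before boundary
        if log_rms[i] > thr2:                       # test 1: steps 82-84
            continue
        if sum(log_rms[lo:i]) / max(i - lo, 1) > thr2:     # test 2: steps 86-87
            continue
        if sum(zcr[lo:i]) / max(i - lo, 1) > 1.3 * z_ave:  # test 3: steps 88-89
            continue
        break                                       # all three tests pass
    return boundary_ms
```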
More specifically, the system calculates the logarithm of the RMS energy, at step 96, for the 10 time frames immediately subsequent to the new ending boundary value to determine whether it exceeds Ethreshold2 as given by Equation (3). If, at step 97, this value exceeds Ethreshold2, then execution continues at step 90. If not, the system calculates the average of the zero crossing rate for those 10 time frames. If this value exceeds a zero crossing rate threshold equal to four times the average zero crossing rate, then execution also returns to step 90 for further processing. An adjustment value or "progression step" of 50 msec continues to be used for the ending boundary value. As with the adjustment of the beginning boundary value, the adjustment of the ending boundary value is bounded: iteration will not proceed beyond 150 msec.
Only after the ending boundary value is adjusted by a sufficient amount to pass all three above-described tests, or the iteration limit of 150 msec is reached, will the system proceed to the analysis, quantization, and matching phases summarized with reference to Figure 1. As shown in Figure 7, iterative processing of the beginning and ending boundary values may occur in parallel. Alternatively, the iteration of the ending boundary value may be performed subsequent to the iteration of the beginning boundary value. Other specific parallel or sequential implementations are available, depending upon the hardware of the system.
To briefly summarize, the system processes an input speech signal to determine the boundary values reliably demarcating words within the speech signal. First, the system divides the input signal into a set of time windows and calculates comparison values for each time window, representative, in part, of frequency components of the signal within the time frames, to produce a comparison function which varies with time. Next, the system compares the comparison function with a threshold value to determine approximate boundary values. The approximate boundary values represent the first and last crossover points where the comparison function crosses the threshold value, either by rising from below the threshold to above the threshold, or by dropping from above the threshold to below the threshold. Once the approximate boundary values are calculated, the system adjusts the boundary values to achieve final boundary values. The specific amount of adjustment varies, depending upon the noise level of the signal. If a high or medium noise level exists, then a single adjustment occurs. The single adjustment amount varies according to the specific noise level. If a low noise level exists, then a more refined iterative adjustment is performed. The beginning and ending boundary values are first adjusted to new beginning and ending boundary values. Then, these new values are tested against various threshold values. If any of a number of tests fail, then iteration continues and the beginning and ending boundary values are adjusted by a greater amount. Only after the updated boundary values pass all tests or a boundary limit is reached will the system proceed to analyze the content of the words found between the boundary values.
Those skilled in the art will appreciate that various adaptations and modifications of the just-described preferred embodiment can be configured without departing from the scope and spirit of the invention. Therefore, it is to be understood that, within the scope of the appended claims, the invention may be practiced other than as specifically described herein.

Claims

What is Claimed Is:
1. A method for determining boundaries of words carried within a time-varying speech signal, said signal being representative of words separated by regions of relative silence, said method comprising the steps of: determining a constant threshold value representative of the average of said signal within said regions of relative silence; determining a time-varying comparison signal representative of said signal biased to emphasize components of said signal having frequencies within a preselected frequency band; comparing said time-varying comparison signal with said constant threshold value to determine first and last crossover times when said comparison signal crosses said threshold, said times representing boundaries of said words for reliably locating the beginning and ending of said words within the speech signal.
2. The method of Claim 1, wherein said step of determining a time-varying comparison signal comprises the steps of: determining a time-varying signal representative of the logarithm of a root-mean-square (RMS) energy of said speech signal; determining a time-varying signal representative of components of said signal having frequencies in said preselected frequency band; and adding said time-varying signal representative of the logarithm of the RMS energy within said speech signal to said time-varying signal representative of components of said signal having frequencies within said preselected frequency band.
3. The method of Claim 1, wherein said step of determining a time-varying comparison signal comprises the steps of: dividing said speech signal into a plurality of successive individual signals; and determining a value representative of a total energy carried within said preselected frequency band in each individual signal, each individual signal yielding a single comparison value with all of said comparison values together forming said comparison signal.
4. The method of Claim 3, wherein said step of determining the value representative of the total energy carried within said preselected frequency band within each individual signal comprises the steps of: performing a Fourier transform on each of said individual signals for converting said individual signals into frequency-varying signals; selecting components of said frequency- varying signal having frequencies within said preselected frequency band; and summing all selected components.
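The band-energy computation recited in Claims 3 and 4 can be sketched as follows. A naive discrete Fourier transform is used to keep the sketch self-contained (a real system would use an FFT); all names and the default band limits are illustrative.

```python
import cmath

def band_energy(frame, sample_rate, lo_hz=250.0, hi_hz=3500.0):
    """Transform one windowed frame, select the bins whose
    frequencies fall within the preselected frequency band, and
    sum their magnitudes (Claim 4)."""
    n = len(frame)
    total = 0.0
    for k in range(n // 2 + 1):                 # non-negative frequencies
        freq = k * sample_rate / n              # bin k -> frequency in Hz
        if lo_hz <= freq <= hi_hz:
            x = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
            total += abs(x)
    return total
```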
5. The method of Claim 1, wherein said preselected frequency band extends from 250 Hz to 3500 Hz.
6. The method of Claim 1, wherein said step of determining said threshold value comprises the steps of: determining the maximum value, Emax, of an RMS energy of the speech signal; determining an average value, Eave, of an RMS energy of said regions of relative silence; and calculating said threshold from the equation: Ethreshold = ((Emax - Eave) / A) + Eave, where A is a preselected constant.
7. The method of Claim 6, wherein said constant A is approximately 2.9.
8. The method of Claim 1, wherein said step of comparing said time-varying comparison signal to said constant threshold value further comprises the steps of: determining an approximate beginning word boundary time by determining when said comparison signal first rises from below said threshold to above said threshold; and determining an approximate ending word boundary time by determining when said comparison signal last drops from above said threshold to below said threshold.
9. The method of Claim 8, comprising the additional steps of adjusting the approximate word boundary times by: determining an average level of noise in the speech signal; determining first and second adjustment values based on said average level of noise; adding said first adjustment value to said ending word boundary time; and subtracting said second adjustment value from said beginning word boundary time.
10. The method of Claim 8, comprising the additional steps of iteratively adjusting the approximate word boundary times by: adding a first adjustment value to the ending boundary time to obtain a new value for the ending boundary time; subtracting a second adjustment value from the beginning boundary time to obtain a new value for the beginning boundary time; comparing values representative of the signal level at the adjusted boundary times to a second threshold value; and repeating said steps of adding a first adjustment value to said ending boundary time and subtracting a second adjustment value from said beginning boundary time if said second threshold value continues to exceed said values representative of the signal level of the adjusted boundary times.
11. The method of Claim 10, wherein said first adjustment value is initially 50 msec and said second adjustment value is initially 20 msec.
12. The method of Claim 10, wherein said values representative of said signal level are representative of the logarithm of an RMS energy of the signal and representative of a zero crossing rate of the signal.
13. The method of Claim 8, comprising the additional steps of: adjusting the approximate word boundary times by: determining an average level of noise in the speech signal; if the average level of noise in the signal exceeds a predetermined noise level, performing the steps of: adding a first adjustment value to said ending word boundary time; and subtracting a second adjustment value from said beginning word boundary time; if the average level of noise in the signal does not exceed the predetermined noise level, performing the steps of: iteratively adjusting the approximate word boundary times by: adding a third adjustment value to the ending boundary time to obtain a new value for the ending boundary time; subtracting a fourth adjustment value from the beginning boundary time to obtain a new value for the beginning boundary time; comparing values representative of said signal at said new boundary times to a second threshold value; and repeating said steps of adding a third adjustment value to said ending boundary time and subtracting a fourth adjustment value from said beginning boundary time if said second threshold exceeds said values representative of said signal at said new boundary times.
14. The method of Claim 13, wherein said predetermined noise level approximately corresponds to an SNR of 15 dB.
15. The method of Claim 1, wherein said crossover times represent approximate beginning and ending boundary times, and wherein maximum boundary times are derived from said approximate boundary times, with ranges of time between the maximum boundary times and the approximate boundary times representing ranges of time values in which the actual beginning and ending boundaries of words may be reliably found.
16. A method for determining beginning and ending boundaries of words carried within a time-varying speech signal, said signal being representative of a plurality of words separated by regions of relative silence, said method comprising the steps of: determining a threshold value representative of the average of said signal within regions of relative silence; determining a time-varying comparison signal representative of said signal biased to emphasize components of said signal having frequencies within a preselected frequency band; comparing said time-varying comparison signal to said threshold value to determine times when said signal crosses said threshold, said times being an indication of approximate boundary times of said words within said signal; and adjusting said approximate boundary times by applying adjustment values, said adjustment values varying according to the level of noise in said signal.
17. A method for determining beginning and ending boundaries of words carried within a speech signal comprised of energy values varying with time, said signal being representative of words separated by regions of relative silence, said signal having a zero crossing rate representative of the rate at which the energy values of the signal pass through a zero energy level, said signal including an initial period of relative silence, said method comprising the steps of: dividing said speech signal into a plurality of time windows, each time window having a plurality of sequential energy values; determining a discrete threshold value representative of an average energy for energy values occurring in said initial period of relative silence; for each time window, determining a parameter representative of the total energy of said signal within the window biased to emphasize components of the signal having frequencies within a preselected frequency band to provide a comparison function comprising a plurality of said parameters varying as a function of time; comparing said comparison function with said threshold value to determine time values when said comparison function crosses said threshold, said time values being an indication of the boundaries of words within said signal; and adjusting said time values by applying an adjustment factor representative of the noise level of said signal and representative of the zero crossing rate of said signal.
PCT/US1993/001611 1992-02-28 1993-02-24 Method for determining boundaries of isolated words WO1993017415A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP5515034A JPH06507507A (en) 1992-02-28 1993-02-24 Method for determining independent word boundaries in audio signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/843,013 1992-02-28
US07/843,013 US5305422A (en) 1992-02-28 1992-02-28 Method for determining boundaries of isolated words within a speech signal

Publications (1)

Publication Number Publication Date
WO1993017415A1 true WO1993017415A1 (en) 1993-09-02

Family

ID=25288834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1993/001611 WO1993017415A1 (en) 1992-02-28 1993-02-24 Method for determining boundaries of isolated words

Country Status (3)

Country Link
US (1) US5305422A (en)
JP (1) JPH06507507A (en)
WO (1) WO1993017415A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0859353A2 (en) * 1997-02-13 1998-08-19 Siemens Business Communication Systems, Inc. Signal processing method and system utilizing logical speech boundaries
EP0945854A2 (en) * 1998-03-24 1999-09-29 Matsushita Electric Industrial Co., Ltd. Speech detection system for noisy conditions
CN111429927A (en) * 2020-03-11 2020-07-17 云知声智能科技股份有限公司 Method for improving personalized synthesized voice quality
US11145305B2 (en) 2018-12-18 2021-10-12 Yandex Europe Ag Methods of and electronic devices for identifying an end-of-utterance moment in a digital audio signal

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5579431A (en) * 1992-10-05 1996-11-26 Panasonic Technologies, Inc. Speech detection in presence of noise by determining variance over time of frequency band limited energy
US5617508A (en) * 1992-10-05 1997-04-01 Panasonic Technologies Inc. Speech detection device for the detection of speech end points based on variance of frequency band limited energy
DE4306508A1 (en) * 1993-03-03 1994-09-08 Philips Patentverwaltung Method and arrangement for determining words in a speech signal
US6471420B1 (en) 1994-05-13 2002-10-29 Matsushita Electric Industrial Co., Ltd. Voice selection apparatus voice response apparatus, and game apparatus using word tables from which selected words are output as voice selections
DE4422545A1 (en) * 1994-06-28 1996-01-04 Sel Alcatel Ag Start / end point detection for word recognition
US5594834A (en) * 1994-09-30 1997-01-14 Motorola, Inc. Method and system for recognizing a boundary between sounds in continuous speech
US5596679A (en) * 1994-10-26 1997-01-21 Motorola, Inc. Method and system for identifying spoken sounds in continuous speech by comparing classifier outputs
US5638486A (en) * 1994-10-26 1997-06-10 Motorola, Inc. Method and system for continuous speech recognition using voting techniques
US5638487A (en) * 1994-12-30 1997-06-10 Purespeech, Inc. Automatic speech recognition
US5712953A (en) * 1995-06-28 1998-01-27 Electronic Data Systems Corporation System and method for classification of audio or audio/video signals based on musical content
KR100363251B1 (en) * 1996-10-31 2003-01-24 삼성전자 주식회사 Method of judging end point of voice
JP3625002B2 (en) * 1996-12-26 2005-03-02 株式会社リコー Voice recognition device
JP3578587B2 (en) * 1997-03-28 2004-10-20 株式会社リコー Voice recognition device and voice recognition method
US5995924A (en) * 1997-05-05 1999-11-30 U.S. West, Inc. Computer-based method and apparatus for classifying statement types based on intonation analysis
US6370504B1 (en) * 1997-05-29 2002-04-09 University Of Washington Speech recognition on MPEG/Audio encoded files
US6216103B1 (en) * 1997-10-20 2001-04-10 Sony Corporation Method for implementing a speech recognition system to determine speech endpoints during conditions with background noise
US6003004A (en) * 1998-01-08 1999-12-14 Advanced Recognition Technologies, Inc. Speech recognition method and system using compressed speech data
US5970447A (en) * 1998-01-20 1999-10-19 Advanced Micro Devices, Inc. Detection of tonal signals
US6711536B2 (en) * 1998-10-20 2004-03-23 Canon Kabushiki Kaisha Speech processing apparatus and method
DE19854341A1 (en) * 1998-11-25 2000-06-08 Alcatel Sa Method and circuit arrangement for speech level measurement in a speech signal processing system
US6324509B1 (en) * 1999-02-08 2001-11-27 Qualcomm Incorporated Method and apparatus for accurate endpointing of speech in the presence of noise
US7277853B1 (en) * 2001-03-02 2007-10-02 Mindspeed Technologies, Inc. System and method for a endpoint detection of speech for improved speech recognition in noisy environments
US7222071B2 (en) * 2002-09-27 2007-05-22 Arbitron Inc. Audio data receipt/exposure measurement with code monitoring and signature extraction
KR100491753B1 (en) * 2002-10-10 2005-05-27 서울통신기술 주식회사 Method for detecting voice signals in voice processor
US20050015244A1 (en) * 2003-07-14 2005-01-20 Hideki Kitao Speech section detection apparatus
US8311819B2 (en) * 2005-06-15 2012-11-13 Qnx Software Systems Limited System for detecting speech with background voice estimates and noise estimates
US8170875B2 (en) * 2005-06-15 2012-05-01 Qnx Software Systems Limited Speech end-pointer
US8069039B2 (en) * 2006-12-25 2011-11-29 Yamaha Corporation Sound signal processing apparatus and program
WO2011070972A1 (en) * 2009-12-10 2011-06-16 日本電気株式会社 Voice recognition system, voice recognition method and voice recognition program
CN106920543B (en) * 2015-12-25 2019-09-06 展讯通信(上海)有限公司 Audio recognition method and device
JP6729635B2 (en) * 2017-12-25 2020-07-22 カシオ計算機株式会社 Voice recognition device, robot, voice recognition method, and recording medium
US10910001B2 (en) 2017-12-25 2021-02-02 Casio Computer Co., Ltd. Voice recognition device, robot, voice recognition method, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4700394A (en) * 1982-11-23 1987-10-13 U.S. Philips Corporation Method of recognizing speech pauses
US4700392A (en) * 1983-08-26 1987-10-13 Nec Corporation Speech signal detector having adaptive threshold values
US4821325A (en) * 1984-11-08 1989-04-11 American Telephone And Telegraph Company, At&T Bell Laboratories Endpoint detector
US4829578A (en) * 1986-10-02 1989-05-09 Dragon Systems, Inc. Speech detection and recognition apparatus for use with background noise of varying levels

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT1044353B (en) * 1975-07-03 1980-03-20 Telettra Lab Telefon METHOD AND DEVICE FOR RECOVERY KNOWLEDGE OF THE PRESENCE E. OR ABSENCE OF USEFUL SIGNAL SPOKEN WORD ON PHONE LINES PHONE CHANNELS
JPS5925240B2 (en) * 1980-12-10 1984-06-15 松下電器産業株式会社 Word beginning detection method for speech sections
JPS5797599A (en) * 1980-12-10 1982-06-17 Matsushita Electric Ind Co Ltd System of detecting final end of each voice section
JPS57158699A (en) * 1981-03-25 1982-09-30 Oki Electric Ind Co Ltd Recognition starting point specification for voice typewriter
JPS57171400A (en) * 1981-04-14 1982-10-21 Sanyo Electric Co Detector for sound region
DE3243232A1 (en) * 1982-11-23 1984-05-24 Philips Kommunikations Industrie AG, 8500 Nürnberg METHOD FOR DETECTING VOICE BREAKS
JPS60205600A (en) * 1984-03-30 1985-10-17 株式会社東芝 Voice recognition equipment
JPS60260096A (en) * 1984-06-06 1985-12-23 富士通株式会社 Correction system for voice section detecting threshold in voice recognition
JPS62204300A (en) * 1986-03-05 1987-09-08 日本無線株式会社 Voice switch
JP3125928B2 (en) * 1989-02-03 2001-01-22 株式会社リコー Voice recognition device
JP2701431B2 (en) * 1989-03-06 1998-01-21 株式会社デンソー Voice recognition device
JP2992324B2 (en) * 1990-10-26 1999-12-20 株式会社リコー Voice section detection method


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0859353A2 (en) * 1997-02-13 1998-08-19 Siemens Business Communication Systems, Inc. Signal processing method and system utilizing logical speech boundaries
EP0859353A3 (en) * 1997-02-13 1999-03-03 Siemens Business Communication Systems, Inc. Signal processing method and system utilizing logical speech boundaries
US6167374A (en) * 1997-02-13 2000-12-26 Siemens Information And Communication Networks, Inc. Signal processing method and system utilizing logical speech boundaries
EP0945854A2 (en) * 1998-03-24 1999-09-29 Matsushita Electric Industrial Co., Ltd. Speech detection system for noisy conditions
EP0945854A3 (en) * 1998-03-24 1999-12-29 Matsushita Electric Industrial Co., Ltd. Speech detection system for noisy conditions
US11145305B2 (en) 2018-12-18 2021-10-12 Yandex Europe Ag Methods of and electronic devices for identifying an end-of-utterance moment in a digital audio signal
CN111429927A (en) * 2020-03-11 2020-07-17 云知声智能科技股份有限公司 Method for improving personalized synthesized voice quality

Also Published As

Publication number Publication date
JPH06507507A (en) 1994-08-25
US5305422A (en) 1994-04-19

Similar Documents

Publication Publication Date Title
US5305422A (en) Method for determining boundaries of isolated words within a speech signal
US6216103B1 (en) Method for implementing a speech recognition system to determine speech endpoints during conditions with background noise
Ross et al. Average magnitude difference function pitch extractor
US6314396B1 (en) Automatic gain control in a speech recognition system
US4038503A (en) Speech recognition apparatus
US8831942B1 (en) System and method for pitch based gender identification with suspicious speaker detection
KR100713366B1 (en) Pitch information extracting method of audio signal using morphology and the apparatus therefor
JP2002516420A (en) Voice coder
US6718302B1 (en) Method for utilizing validity constraints in a speech endpoint detector
EP1511007B1 (en) Vocal tract resonance tracking using a target-guided constraint
Friedman Pseudo-maximum-likelihood speech pitch extraction
KR100827097B1 (en) Method for determining variable length of frame for preprocessing of a speech signal and method and apparatus for preprocessing a speech signal using the same
US7966179B2 (en) Method and apparatus for detecting voice region
US6470311B1 (en) Method and apparatus for determining pitch synchronous frames
JP4217616B2 (en) Two-stage pitch judgment method and apparatus
JP3354252B2 (en) Voice recognition device
WO2018138543A1 (en) Probabilistic method for fundamental frequency estimation
KR20050050533A (en) Method and apparatus for continuous valued vocal tract resonance tracking using piecewise linear approximations
KR0136608B1 (en) Phoneme recognizing device for voice signal status detection
KR100194953B1 (en) Pitch detection method by frame in voiced sound section
GB2216320A (en) Selective addition of noise to templates employed in automatic speech recognition systems
JP3892379B2 (en) Harmonic structure section estimation method and apparatus, harmonic structure section estimation program and recording medium recording the program, harmonic structure section estimation threshold determination method and apparatus, harmonic structure section estimation threshold determination program and program Recording media
JP2898637B2 (en) Audio signal analysis method
CA1180813A (en) Speech recognition apparatus
JP2583854B2 (en) Voiced / unvoiced judgment method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP