CA2342353C - An adaptive criterion for speech coding - Google Patents


Info

Publication number
CA2342353C
CA2342353C (application CA002342353A)
Authority
CA
Canada
Prior art keywords
balance factor
speech signal
factor
original speech
voicing level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA002342353A
Other languages
French (fr)
Other versions
CA2342353A1 (en)
Inventor
Erik Ekudden
Roar Hagen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed
Application filed by Telefonaktiebolaget LM Ericsson AB
Publication of CA2342353A1
Application granted
Publication of CA2342353C
Anticipated expiration
Status: Expired - Lifetime

Links

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/08 — Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12 — Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G10L19/083 — Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters, the excitation function being an excitation gain
    • G10L2019/0001 — Codebooks
    • G10L2019/0003 — Backward prediction of gain
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/93 — Discriminating between voiced and unvoiced parts of speech signals
    • G10L2025/935 — Mixed voiced class; Transitions

Abstract

In producing from an original speech signal a plurality of parameters (gaQ, gfQ) from which an approximation of the original speech signal can be reconstructed, a further signal is generated in response to the original speech signal, which further signal is intended to represent the original speech signal. At least one of the parameters is determined (69, 71) using first and second differences between the original speech signal and the further signal. The first difference is a difference between a waveform associated with the original speech signal and a waveform associated with the further signal, and the second difference is a difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the further signal.

Description

AN ADAPTIVE CRITERION FOR SPEECH CODING

FIELD OF THE INVENTION

The invention relates generally to speech coding and, more particularly, to improved coding criteria for accommodating noise-like signals at lowered bit rates.
BACKGROUND OF THE INVENTION
Most modern speech coders are based on some form of model for generation of the coded speech signal. The parameters and signals of the model are quantized and information describing them is transmitted on the channel. The dominant coder model in cellular telephony applications is the Code Excited Linear Prediction (CELP) technology.
A conventional CELP decoder is depicted in FIGURE 1. The coded speech is generated by an excitation signal fed through an all-pole synthesis filter with a typical order of 10. The excitation signal is formed as a sum of two signals ca and cf, which are picked from respective codebooks (one adaptive and one fixed, respectively) and subsequently multiplied by suitable gain factors ga and gf. The codebook signals are typically of length 5 ms (a subframe) whereas the synthesis filter is typically updated every 20 ms (a frame). The parameters associated with the CELP model are the synthesis filter coefficients, the codebook entries and the gain factors.
In FIGURE 2, a conventional CELP encoder is depicted. A replica of the CELP decoder (FIGURE 1) is used to generate candidate coded signals for each subframe. The coded signal is compared to the uncoded (digitized) signal at 21 and a weighted error signal is used to control the encoding process. The synthesis filter is determined using linear prediction (LP). This conventional encoding procedure is referred to as linear prediction analysis-by-synthesis (LPAS).
As understood from the description above, LPAS coders employ waveform matching in a weighted speech domain, i.e., the error signal is filtered with a weighting filter. This can be expressed as minimizing the following squared error criterion:
Dw = ||Sw - CSw||² = ||W·S - W·H·(ga·ca + gf·cf)||²   (Eq. 1)

where S is the vector containing one subframe of uncoded speech samples, Sw represents S multiplied by the weighting filter W, ca and cf are the code vectors from the adaptive and fixed codebooks respectively, W is a matrix performing the weighting filter operation, H is a matrix performing the synthesis filter operation, and CSw is the coded signal multiplied by the weighting filter W. Conventionally, the encoding operation for minimizing the criterion of Equation 1 is performed according to the following steps:

Step 1. Compute the synthesis filter by linear prediction and quantize the filter coefficients. The weighting filter is computed from the linear prediction filter coefficients.

Step 2. The code vector ca is found by searching the adaptive codebook to minimize DW of Equation 1 assuming that gf is zero and that ga is equal to the optimal value. Because each code vector ca has conventionally associated therewith an optimal value of ga, the search is done by inserting each code vector ca into Equation 1 along with its associated optimal ga value.

Step 3. The code vector cf is found by searching the fixed codebook to minimize Dw, using the code vector ca and gain ga found in step 2.
The fixed gain gf is assumed equal to the optimal value.
Step 4. The gain factors ga and gf are quantized. Note that ga can be quantized after step 2 if scalar quantizers are used.
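As a concrete illustration of the Equation 1 criterion and the Step 2 adaptive codebook search, the following Python sketch may help; the function names, the toy matrices, and the exhaustive loop are assumptions for exposition, not the patent's implementation:

```python
import numpy as np

def weighted_error(S, W, H, ca, cf, ga, gf):
    """Squared weighted error Dw = ||W·S - W·H·(ga·ca + gf·cf)||^2 (Eq. 1)."""
    excitation = ga * ca + gf * cf
    return float(np.sum((W @ S - W @ H @ excitation) ** 2))

def search_adaptive_codebook(S, W, H, adaptive_codebook):
    """Step 2: pick the ca (paired with its optimal ga) minimizing Dw, with gf = 0."""
    target = W @ S
    best = None
    for ca in adaptive_codebook:
        y = W @ H @ ca                                # filtered candidate vector
        ga_opt = float(target @ y) / float(y @ y)     # optimal gain for this ca
        err = float(np.sum((target - ga_opt * y) ** 2))
        if best is None or err < best[0]:
            best = (err, ca, ga_opt)
    return best[1], best[2]
```

The fixed codebook search of Step 3 proceeds analogously, with the chosen ga·ca contribution held fixed.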

The waveform matching procedure described above is known to work well, at least for bit rates of, say, 8 kb/s or more. However, when lowering the bit rate, the ability to do waveform matching of non-periodic, noise-like signals such as unvoiced speech and background noise suffers. For voiced speech segments, the waveform matching criterion still performs well, but the poor waveform matching ability for noise-like signals leads to a coded signal with an often too low level and an annoying varying character (known as swirling).

For noise-like signals, it is well known in the art that it is better to match the spectral character of the signal and have a good signal level (gain) matching.
Since the linear prediction synthesis filter provides the spectral character of the signal, an alternative criterion to Equation 1 above can be used for noise-like signals:

DE = (√Es - √Ecs)²   (Eq. 2)

where Es is the energy of the uncoded speech signal and Ecs is the energy of the coded signal CS = H·(ga·ca + gf·cf). Equation 2 implies energy matching as opposed to the waveform matching of Equation 1. This criterion can also be used in the weighted speech domain by including the weighting filter W. Note that the square root operations are included in Equation 2 only to have a criterion in the same domain as Equation 1; this is not necessary and is not a restriction. There are also other possible energy-matching criteria, such as DE = |Es - Ecs|.
The criterion can also be formulated in the residual domain as follows:

DE = (√Er - √Ex)²   (Eq. 3)

where Er is the energy of the residual signal r obtained by filtering S through the inverse (H⁻¹) of the synthesis filter, and Ex is the energy of the excitation signal given by x = ga·ca + gf·cf.
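The two energy matching criteria can be evaluated directly. The following Python fragment is an illustrative sketch of Equations 2 and 3; the function names are invented here:

```python
import numpy as np

def energy_match_speech(S, CS):
    """DE = (sqrt(Es) - sqrt(Ecs))^2 of Eq. 2: energies of uncoded and coded speech."""
    return float((np.sqrt(np.sum(S ** 2)) - np.sqrt(np.sum(CS ** 2))) ** 2)

def energy_match_residual(r, x):
    """Residual-domain form of Eq. 3: DE = (sqrt(Er) - sqrt(Ex))^2."""
    return float((np.sqrt(np.sum(r ** 2)) - np.sqrt(np.sum(x ** 2))) ** 2)
```

Note that two signals with very different waveforms but equal energy give DE = 0, which is exactly the property wanted for noise-like segments.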
The different criteria above have been employed in conventional multi-mode coding, where different coding modes (e.g., energy matching) have been used for unvoiced speech and background noise. In these modes, energy matching criteria as in Equations 2 and 3 have been used. A drawback with this approach is the need for a mode decision, for example, choosing the waveform matching mode (Equation 1) for voiced speech and choosing an energy matching mode (Equation 2 or 3) for noise-like signals such as unvoiced speech and background noise. The mode decision is sensitive and causes annoying artifacts when wrong. Also, the drastic change of coding strategy between modes can cause unwanted sounds.

SUMMARY OF THE INVENTION

It is therefore desirable to provide improved coding of noise-like signals at lowered bit rates without the aforementioned disadvantages of multi-mode coding.

According to an aspect of the invention, there is provided a method for producing a plurality of parameters from an original speech signal in a speech encoder, wherein an approximation of the original speech signal can be reconstructed from the plurality of parameters, the method comprising the steps of:
generating in response to the original speech signal a coded signal representing the original speech signal;
determining a first difference between a waveform associated with the original speech signal and a waveform associated with the coded signal;
determining a second difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the coded signal; and determining at least one of the plurality of parameters from a combination of the first and second differences.

According to another aspect of the invention, there is provided a speech encoding apparatus, comprising:
an input for receiving an original speech signal;
an output for providing information indicative of a plurality of parameters from which an approximation of the original speech signal can be reconstructed; and a controller coupled between said input and said output for providing a coded signal in response to the original speech signal, said coded signal representing the original speech signal, said controller further being for determining at least one of said plurality of parameters from a combination of a first and a second difference between the original speech signal and the coded signal, wherein said first difference is a difference between a waveform associated with the original speech signal and a waveform associated with the coded signal, and wherein the second difference is a difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the coded signal.

According to another aspect of the invention, there is provided a transceiver apparatus for use in a communication system, comprising:
an input for receiving a user input stimulus;
an output for providing an output signal to a communication channel for transmission to a receiver via the communication channel; and a speech encoding apparatus having an input coupled to said transceiver input and having an output coupled to said transceiver output, said input of said speech encoding apparatus being for receiving an original speech signal from said transceiver input, said output of said speech encoding apparatus being for providing to said transceiver output information indicative of a plurality of parameters from which an approximation of the original speech signal can be reconstructed at the receiver, said speech encoding apparatus comprising a controller coupled between said input and said output thereof for providing in response to the original speech signal a coded signal representing the original speech signal, said controller further being for determining at least one of said plurality of parameters from a combination of a first and a second difference between the original speech signal and the coded signal, wherein said first difference is a difference between a waveform associated with the original speech signal and a waveform associated with the coded signal, and wherein the second difference is a difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the coded signal.

According to another aspect of the invention, there is provided a method for producing a plurality of parameters from an original speech signal in a speech encoder, wherein an approximation of the original speech signal can be reconstructed from the plurality of parameters, the plurality of parameters comprising at least a code vector from an adaptive codebook, a code vector from a fixed codebook, a quantized adaptive gain and a quantized fixed gain that are associated with a coded signal representing the original speech signal, the method comprising the steps of:
generating, in response to the original speech signal, the coded signal by selecting the code vector from the adaptive codebook, the code vector from the fixed codebook and the quantized adaptive gain; and determining the quantized fixed gain from a set of quantized fixed gain values by minimizing a criterion defined by:

DgfQ = (1-a)·||cf||²·(gfOPT - gf)² + a·(√Er - ||gaQ·ca + gf·cf||)²

wherein ca is the code vector from the adaptive codebook, cf is the code vector from the fixed codebook, gaQ is the quantized adaptive gain, gf is a fixed gain factor selected from the set of quantized fixed gain values, gfQ is the quantized fixed gain, said quantized fixed gain being the fixed gain factor that minimizes the criterion DgfQ, gfOPT is an optimal fixed gain factor, Er is the energy of a residual signal, and a is a balance factor, and wherein a waveform matching part of the criterion DgfQ is multiplied by a first weighting factor equal to 1-a and an energy matching part of the criterion DgfQ is multiplied by a second weighting factor equal to a.

According to another aspect of the invention, there is provided a speech encoding apparatus, comprising:
an input for receiving an original speech signal;
an output for providing information indicative of a plurality of parameters from which an approximation of the original speech signal can be reconstructed, the plurality of parameters comprising at least a code vector from an adaptive codebook, a code vector from a fixed codebook, a quantized adaptive gain and a quantized fixed gain that are associated with a coded signal;
a controller coupled between said input and said output for providing, in response to the original speech signal, the coded signal representing the original speech signal, said controller further being for selecting the code vector from the adaptive codebook, the code vector from the fixed codebook and the quantized adaptive gain and for determining the quantized fixed gain from a set of quantized fixed gain values by minimizing a criterion defined by:
DgfQ = (1-a)·||cf||²·(gfOPT - gf)² + a·(√Er - ||gaQ·ca + gf·cf||)²

wherein ca is the code vector from the adaptive codebook, cf is the code vector from the fixed codebook, gaQ is the quantized adaptive gain, gf is a fixed gain factor selected from the set of quantized fixed gain values, gfQ is the quantized fixed gain, said quantized fixed gain being the fixed gain factor that minimizes the criterion DgfQ, gfOPT is an optimal fixed gain factor, Er is the energy of a residual signal, and a is a balance factor, and wherein a waveform matching part of the criterion DgfQ is multiplied by a first weighting factor equal to 1-a and an energy matching part of the criterion DgfQ is multiplied by a second weighting factor equal to a; and a balance factor determiner for calculating the balance factor, said balance factor determiner having an output coupled to said controller for providing said balance factor to said controller for use in determining said quantized fixed gain.

The present invention advantageously combines waveform matching and energy matching criteria to improve the coding of noise-like signals at lowered bit rates without the disadvantages of multi-mode coding.

BRIEF DESCRIPTION OF THE DRAWINGS
FIGURE 1 illustrates diagrammatically a conventional CELP decoder.
FIGURE 2 illustrates diagrammatically a conventional CELP encoder.
FIGURE 3 illustrates graphically a balance factor according to the invention.
FIGURE 4 illustrates graphically a specific example of the balance factor of FIGURE 3.
FIGURE 5 illustrates diagrammatically a pertinent portion of an exemplary CELP encoder according to the invention.
FIGURE 6 is a flow diagram which illustrates exemplary operations of the CELP encoder portion of FIGURE 5.
FIGURE 7 illustrates diagrammatically a communication system according to the invention.
DETAILED DESCRIPTION
The present invention combines waveform matching and energy matching criteria into one single criterion DWE. The balance between waveform matching and energy matching is softly and adaptively adjusted by weighting factors:

DWE = K·Dw + L·DE   (Eq. 4)

where K and L are weighting factors determining the relative weights between the waveform matching distortion Dw and the energy matching distortion DE.
The weighting factors K and L can be respectively set equal to 1-a and a as follows:

DWE = (1-a)·Dw + a·DE   (Eq. 5)

where a is a balance factor having a value from 0 to 1 to provide the balance between the waveform matching part Dw and the energy matching part DE of the criterion. The a value is preferably a function of the voicing level, or periodicity, in the current speech segment, a = a(v), where v is a voicing indicator. A principle sketch of an example of the a(v) function is shown in FIGURE 3: at voicing levels below a threshold a the balance factor equals d, at voicing levels above a threshold b it equals c, and it decreases gradually from d to c at voicing levels between a and b.

In one specific formulation the criterion of Equation 5 can be expressed as:
DWE = (1-a)·||Sw - CSw||² + a·(√Esw - √Ecsw)²   (Eq. 6)

where Esw is the energy of the signal Sw and Ecsw is the energy of the signal CSw.
Although the criterion of Equation 6 above, or a variation thereof, can be advantageously used for the entire coding process in a CELP coder, significant improvements result when it is used only in the gain quantization part (i.e., step 4 of the encoding method above). Although the description here details the application of the criterion of Equation 6 to gain quantization, it can be employed in the search of the ca and cf codebooks in a similar manner.

Note that Ecsw of Equation 6 can be expressed as:

Ecsw = ||CSw||²   (Eq. 7)

so that Equation 6 can be rewritten as:

DWE = (1-a)·||Sw - CSw||² + a·(√Esw - ||CSw||)²   (Eq. 8)

It can be seen from Equation 1 that:

CSw = W·H·(ga·ca + gf·cf)   (Eq. 9)

Once the code vectors ca and cf are determined, for example using Equation 1 and Steps 1-3 above, the task is to find the corresponding quantized gain values. For vector quantization, these quantized gain values are given as an entry from the codebook of the vector quantizer. This codebook includes plural entries, and each entry includes a pair of quantized gain values, gaQ and gfQ.

Inserting all pairs of quantized gain values gaQ and gfQ from the vector quantizer codebook into Equation 9, and then inserting each resulting CSw into Equation 8, all possible values of DWE in Equation 8 are computed. The gain value pair from the codebook of the vector quantizer giving the least value of DWE is selected for the quantized gain values.
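This exhaustive gain pair search can be sketched as follows; the Python rendering below of Equations 8 and 9 uses invented names and a precomputed W·H matrix, and is an illustration rather than production codec code:

```python
import numpy as np

def search_gain_vq(Sw, WH, ca, cf, gain_codebook, a):
    """Exhaustively test every (gaQ, gfQ) pair of the vector quantizer,
    minimizing Eq. 8, with CSw built per Eq. 9: CSw = W·H·(ga·ca + gf·cf)."""
    sqrt_Esw = np.sqrt(np.sum(Sw ** 2))
    best_pair, best_D = None, None
    for ga, gf in gain_codebook:
        CSw = WH @ (ga * ca + gf * cf)                         # Eq. 9
        waveform = np.sum((Sw - CSw) ** 2)                     # waveform matching part
        energy = (sqrt_Esw - np.sqrt(np.sum(CSw ** 2))) ** 2   # energy matching part
        D = (1 - a) * waveform + a * energy                    # Eq. 8
        if best_D is None or D < best_D:
            best_pair, best_D = (ga, gf), D
    return best_pair
```

For predictive gain quantization, the same loop applies with each codebook gain multiplied by its predicted gain before entering Equation 9, as described below.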

In several modern coders, predictive quantization is used for the gain values, or at least for the fixed codebook gain value. This is straightforwardly incorporated in Equation 9 because the prediction is done before the search. Instead of plugging codebook gain values into Equation 9, the codebook gain values multiplied by the predicted gain values are plugged into Equation 9. Each resulting CSw is then inserted in Equation 8 as above.

For scalar quantization of the gain factors, a simple criterion is often used where the optimal gain is quantized directly, i.e., a criterion like:

DSGQ = (gOPT - g)²   (Eq. 10)

is used, where DSGQ is the scalar gain quantization criterion, gOPT is the optimal gain (either gaOPT or gfOPT) as conventionally determined in Step 2 or 3 above, and g is a quantized gain value from the codebook of either the ga or gf scalar quantizer. The quantized gain value that minimizes DSGQ is selected.

In quantizing the gain factors, the energy matching term may, if desired, be advantageously employed only for the fixed codebook gain, since the adaptive codebook usually plays a minor role for noise-like speech segments. Thus, the criterion of Equation 10 can be used to quantize the adaptive codebook gain while a new criterion DgfQ is used to quantize the fixed codebook gain, namely:

DgfQ = (1-a)·||cf||²·(gfOPT - gf)² + a·(√Er - ||gaQ·ca + gf·cf||)²   (Eq. 11)

where gfOPT is the optimal gf value determined from Step 3 above, and gaQ is the quantized adaptive codebook gain determined using Equation 10. All quantized gain values from the codebook of the gf scalar quantizer are plugged in as gf in Equation 11, and the quantized gain value that minimizes DgfQ is selected.
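A minimal sketch of this fixed gain quantization, assuming the quantities entering Equation 11 are already available (the function and parameter names are illustrative):

```python
import numpy as np

def quantize_fixed_gain(ca, cf, gaQ, gf_opt, Er, gf_codebook, a):
    """Pick the gfQ from the scalar codebook that minimizes Eq. 11:
    DgfQ = (1-a)·||cf||²·(gf_opt - gf)² + a·(sqrt(Er) - ||gaQ·ca + gf·cf||)²."""
    cf_energy = np.sum(cf ** 2)
    sqrt_Er = np.sqrt(Er)
    best_gf, best_D = None, None
    for gf in gf_codebook:
        waveform = cf_energy * (gf_opt - gf) ** 2
        excitation_norm = np.sqrt(np.sum((gaQ * ca + gf * cf) ** 2))
        D = (1 - a) * waveform + a * (sqrt_Er - excitation_norm) ** 2
        if best_D is None or D < best_D:
            best_gf, best_D = gf, D
    return best_gf
```

With a = 0 the search reduces to Equation 10 (nearest quantized gain to gfOPT); as a grows, candidates whose excitation energy matches the residual energy Er are increasingly preferred.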
The adaptation of the balance factor a is a key to obtaining good performance with the new criterion. As described earlier, a is preferably a function of the voicing level. The coding gain of the adaptive codebook is one example of a good indicator of the voicing level. Examples of voicing level determinations thus include:

vv = 10·log10( ||r||² / ||r - gaOPT·ca||² )   (Eq. 12)

vs = 10·log10( ||r||² / ||r - gaQ·ca||² )   (Eq. 13)

where vv is the voicing level measure for vector quantization, vs is the voicing level measure for scalar quantization, and r is the residual signal defined hereinabove.
Although the voicing level is determined in the residual domain using Equations 12 and 13, the voicing level can also be determined in, for example, the weighted speech domain by substituting Sw for r in Equations 12 and 13, and multiplying the ga·ca terms of Equations 12 and 13 by W·H.
To avoid local fluctuations in the v values, the v values can be filtered before mapping to the a domain. For instance, a median filter of the current value and the values for the previous 4 subframes can be used as follows:

vm = median(v, v-1, v-2, v-3, v-4)   (Eq. 14)

where v-1, v-2, v-3 and v-4 are the v values for the previous 4 subframes.
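The voicing measure and its median smoothing can be sketched as below; this is an illustrative rendering of Equations 12-14, with invented function names:

```python
import numpy as np

def voicing_level(r, ga, ca):
    """v = 10·log10(||r||² / ||r - ga·ca||²), Eqs. 12/13
    (use gaOPT for vector quantization, gaQ for scalar quantization)."""
    return 10.0 * np.log10(np.sum(r ** 2) / np.sum((r - ga * ca) ** 2))

def median_filtered_voicing(v, previous):
    """vm: median of the current value and the previous 4 subframes' values (Eq. 14)."""
    return float(np.median([v] + list(previous[-4:])))
```

A highly periodic subframe, where ga·ca removes most of the residual energy, yields a large v; a noise-like subframe yields v near zero.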

The function shown in FIGURE 4 illustrates one example of the mapping from the voicing indicator vm to the balance factor a. This function is mathematically expressed as:

a(vm) = 0.5              for vm ≤ 0
a(vm) = 0.5 - 0.25·vm    for 0 < vm < 2.0
a(vm) = 0                for vm ≥ 2.0
(Eq. 15)

Note that the maximum value of a is less than 1, meaning that full energy matching never occurs, and some waveform matching always remains in the criterion (see Equation 5).
At speech onsets, when the energy of the signal increases dramatically, the adaptive codebook coding gain is often small due to the fact that the adaptive codebook does not contain relevant signals. However, waveform matching is important at onsets, and therefore a is forced to zero if an onset is detected.
A simple onset detection based on the optimal fixed codebook gain can be used as follows:
a(vm) = 0 if gfOPT > 2.0·gfOPT,-1   (Eq. 16)

where gfOPT,-1 is the optimal fixed codebook gain determined in Step 3 above for the previous subframe.
It is also advantageous to limit the increase in the a value when it was zero in the previous subframe. This can be implemented by simply dividing the a value by a suitable number, e.g., 2.0 when the previous a value was zero. Artifacts caused by moving from pure waveform matching to more energy matching are thereby avoided.
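Combining Equation 15, the onset rule of Equation 16, and the limiting rule above, one possible sketch of the balance factor adaptation is the following (the divisor of 2.0 and the parameter names are the illustrative choices mentioned in the text):

```python
def balance_factor(vm, gf_opt, gf_opt_prev, a_prev):
    """Map the filtered voicing indicator vm to the balance factor a (Eq. 15),
    zero it on a detected onset (Eq. 16), and halve it when the previous a was zero."""
    if vm <= 0.0:
        a = 0.5
    elif vm < 2.0:
        a = 0.5 - 0.25 * vm          # linear ramp between the two plateaus
    else:
        a = 0.0
    if gf_opt > 2.0 * gf_opt_prev:   # simple onset detector (Eq. 16)
        a = 0.0
    if a_prev == 0.0:                # limit the step up from pure waveform matching
        a /= 2.0
    return a
```

The output can then be averaged with previous subframes' a values, as the next paragraph describes.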
Also, once the balance factor a has been determined using Equations 15 and 16, it can be advantageously filtered, for example, by averaging it with the a values of previous subframes.
As mentioned above, Equation 6 (and thus Equations 8 and 9) can also be used to select the adaptive and fixed codebook vectors ca and cf. Because the adaptive codebook vector ca is not yet known, the voicing measures of Equations 12 and 13 cannot be calculated, so the balance factor a of Equation 15 also cannot be calculated.
Thus, in order to use Equations 8 and 9 for the fixed and adaptive codebook searches, the balance factor a is preferably set to a value which has been empirically determined to yield the desired results for noise-like signals. Once the balance factor a has been empirically determined, the fixed and adaptive codebook searches can proceed in the manner set forth in Steps 1-4 above, but using the criterion of Equations 8 and 9. Alternatively, after ca and ga are determined in Step 2 using an empirically determined a value, Equations 12-15 can be used as appropriate to determine a value of a to be used in Equation 8 during the Step 3 search of the fixed codebook.
FIGURE 5 is a block diagram representation of an exemplary portion of a CELP speech encoder according to the invention. The encoder portion of FIGURE 5 includes a criteria controller 51 having an input for receiving the uncoded speech signal, and also coupled for communication with the fixed and adaptive codebooks 61 and 62, and with gain quantizer codebooks 50, 54 and 60. The criteria controller 51 is capable of performing all conventional operations associated with the CELP encoder design of FIGURE 2, including implementing the conventional criteria represented by Equations 1-3 and 10 above, and performing the conventional operations described in Steps 1-4 above.

In addition to the above-described conventional operations, the criteria controller 51 is also capable of implementing the operations described above with respect to Equations 4-9 and 11-16. The criteria controller 51 provides a voicing determiner 53 with ca as determined in Step 2 above, and gaOPT (or gaQ if scalar quantization is used) as determined by executing Steps 1-4 above. The criteria controller further applies the inverse synthesis filter H⁻¹ to the uncoded speech signal to thereby determine the residual signal r, which is also input to the voicing determiner 53.
The voicing determiner 53 responds to its above-described inputs to determine the voicing level indicator v according to Equation 12 (vector quantization) or Equation 13 (scalar quantization). The voicing level indicator v is provided to the input of a filter 55 which subjects the voicing level indicator v to a filtering operation (such as the median filtering described above), thereby producing a filtered voicing level indicator vf as an output. For median filtering, the filter 55 may include a memory portion 56 as shown for storing the voicing level indicators of previous subframes.

The filtered voicing level indicator vf output from filter 55 is input to a balance factor determiner 57. The balance factor determiner 57 uses the filtered voicing level indicator vf to determine the balance factor a, for example in the manner described above with respect to Equation 15 (where vm represents a specific example of vf of FIGURE 5) and FIGURE 4. The criteria controller 51 inputs gfOPT for the current subframe to the balance factor determiner 57, and this value can be stored in a memory 58 of the balance factor determiner 57 for use in implementing Equation 16.
The balance factor determiner also includes a memory 59 for storing the a value of each subframe (or at least a values of zero) in order to permit the balance factor determiner 57 to limit the increase in the a value when the a value associated with the previous subframe was zero.

Once the criteria controller 51 has obtained the synthesis filter coefficients, and has applied the desired criteria to determine the codebook vectors and the associated quantized gain values, then information indicative of these parameters is output from the criteria controller at 52 to be transmitted across a communication channel.
FIGURE 5 also illustrates conceptually the codebook 50 of a vector quantizer, and the codebooks 54 and 60 of respective scalar quantizers for the adaptive codebook gain value ga and the fixed codebook gain value gf. As described above, the vector quantizer codebook 50 includes a plurality of entries, each entry including a pair of quantized gain values gaQ and gfQ. The scalar quantizer codebooks 54 and 60 each include one quantized gain value per entry.
FIGURE 6 illustrates in flow diagram format exemplary operations (as described in detail above) of the example encoder portion of FIGURE 5. When a new subframe of uncoded speech is received at 63, Steps 1-4 above are executed according to a desired criterion at 64 to determine ca, ga, cf and gf. Thereafter at 65, the voicing measure v is determined, and the balance factor a is thereafter determined at 66.
Thereafter, at 67, the balance factor is used to define the criterion for gain factor quantization, DWE, in terms of waveform matching and energy matching. If vector quantization is being used at 68, then the combined waveform matching/energy matching criterion DWE is used to quantize both of the gain factors at 69. If scalar quantization is being used, then at 70 the adaptive codebook gain ga is quantized using DSGQ of Equation 10, and at 71 the fixed codebook gain gf is quantized using the combined waveform matching/energy matching criterion DgfQ of Equation 11.
After the gain factors have been quantized, the next subframe is awaited at 63.
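The gain quantization flow above can be sketched in code. The sketch below combines a waveform matching term and an energy matching term under the balance factor α; it does not reproduce the exact form of the patent's Equations 10 and 11, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def quantize_fixed_gain(target, c_a, c_f, g_aQ, gain_codebook, E_r, alpha):
    """Pick the quantized fixed-codebook gain minimizing a combined
    waveform/energy matching criterion, a sketch of the idea behind
    the DgfQ criterion described above.

    target        : target vector for the current subframe
    c_a, c_f      : adaptive and fixed codebook contribution vectors
    g_aQ          : already-quantized adaptive codebook gain
    gain_codebook : candidate quantized fixed-gain values
    E_r           : residual energy to be matched
    alpha         : balance factor in [0, 1]
    """
    best_g, best_D = None, np.inf
    for g_f in gain_codebook:
        excitation = g_aQ * c_a + g_f * c_f
        D_w = np.sum((target - excitation) ** 2)     # waveform matching part
        D_e = (np.sum(excitation ** 2) - E_r) ** 2   # energy matching part
        D = (1.0 - alpha) * D_w + alpha * D_e        # combined criterion
        if D < best_D:
            best_g, best_D = g_f, D
    return best_g
```

With α = 0 the search reduces to pure waveform matching, and with α = 1 to pure energy matching; intermediate values of α blend the two, which is the soft combination the criterion provides.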
FIGURE 7 is a block diagram of an example communication system including a speech encoder according to the present invention. In FIGURE 7, an encoder 72 according to the present invention is provided in a transceiver 73 which communicates with a transceiver 74 via a communication channel 75. The encoder 72 receives an uncoded speech signal, and provides to the channel 75 information from which a conventional decoder 76 (such as described above with respect to FIGURE 1) in transceiver 74 can reconstruct the original speech signal. As one example, the transceivers 73 and 74 of FIGURE 7 could be cellular telephones, and the channel 75 could be a communication channel through a cellular telephone network. Other applications for the speech encoder 72 of the present invention are numerous and readily apparent.

It will be apparent to workers in the art that a speech encoder according to the invention can be readily implemented using, for example, a suitably programmed digital signal processor (DSP) or other data processing device, either alone or in combination with external support logic.
The new speech coding criterion softly combines waveform matching and energy matching. The need to use strictly one or the other is thus avoided; a suitable mixture of the two criteria can be employed instead, which eliminates the problem of wrong mode decisions between criteria. The adaptive nature of the criterion makes it possible to smoothly adjust the balance of waveform and energy matching, so that artifacts due to drastically changing the criterion are controlled.
Some waveform matching can always be maintained in the new criterion, thus avoiding the problem of a completely unsuitable high-level signal sounding like a noise burst.
Although exemplary embodiments of the present invention have been described above in detail, this does not limit the scope of the invention, which can be practiced in a variety of embodiments.
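The voicing-based determination of the balance factor described above (voicing measure at 65, balance factor at 66) including the median filtering of the voicing level can be sketched as follows. The window length and the linear mapping from filtered voicing to α are illustrative assumptions, not values given in the patent.

```python
def balance_factor_from_voicing(voicing_history, window=5):
    """Derive a balance factor from a median-filtered voicing level.

    The median over the most recent voicing levels smooths spurious
    voicing decisions. The balance factor is then made small for
    strongly voiced speech (favoring waveform matching) and large
    for unvoiced, noise-like speech (favoring energy matching).

    voicing_history : voicing levels in [0, 1], most recent last
    window          : number of recent levels in the median filter
    """
    recent = voicing_history[-window:]
    v_med = sorted(recent)[len(recent) // 2]  # median voicing level
    v_med = min(max(v_med, 0.0), 1.0)         # clamp to [0, 1]
    return 1.0 - v_med                        # high voicing -> small alpha
```

A single outlier voicing estimate in an otherwise voiced segment is rejected by the median, so the balance factor, and hence the criterion, changes smoothly rather than flipping between matching modes.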

Claims (42)

WHAT IS CLAIMED IS:
1. A method for producing a plurality of parameters from an original speech signal in a speech encoder, wherein an approximation of the original speech signal can be reconstructed from the plurality of parameters, the method comprising the steps of:
generating in response to the original speech signal a coded signal representing the original speech signal;
determining a first difference between a waveform associated with the original speech signal and a waveform associated with the coded signal;
determining a second difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the coded signal; and determining at least one of the plurality of parameters from a combination of the first and second differences.
2. The method of Claim 1, wherein said step of determining at least one of the plurality of parameters comprises the step of assigning to the first and second differences relative degrees of importance in the determination of the at least one parameter.
3. The method of Claim 2, wherein said step of assigning to the first and second differences relative degrees of importance comprises the step of calculating a balance factor indicative of the relative degrees of importance of the first and second differences.
4. The method of Claim 3, further comprising the step of using the balance factor to determine first and second weighting factors respectively associated with the first and second differences, and wherein said step of determining at least one of the plurality of parameters comprises multiplying the first and second differences by the first and second weighting factors, respectively.
5. The method of Claim 4, wherein said step of using the balance factor to determine first and second weighting factors comprises the step of selectively setting one of the weighting factors to zero.
6. The method of Claim 5, wherein said step of selectively setting one of the weighting factors to zero comprises detecting a speech onset in the original speech signal, and setting the second weighting factor to zero in response to detection of the speech onset.
7. The method of Claim 3, wherein said step of calculating the balance factor comprises the step of calculating the balance factor based on at least one previously calculated balance factor.
8. The method of Claim 7, wherein said step of calculating the balance factor based on a previously calculated balance factor comprises limiting the magnitude of the balance factor in response to a previously calculated balance factor having a predetermined magnitude.
9. The method of Claim 3, wherein said step of calculating the balance factor comprises the steps of determining a voicing level associated with the original speech signal, and calculating the balance factor as a function of the voicing level.
10. The method of Claim 9, wherein said step of determining the voicing level comprises the step of applying a filtering operation to the voicing level to produce a filtered voicing level, and wherein said step of calculating the balance factor as a function of the voicing level comprises calculating the balance factor as a function of the filtered voicing level.
11. The method of Claim 10, wherein said step of applying a filtering operation to the voicing level comprises applying a median filtering operation comprising determining a median voicing level among a group of voicing levels comprising the voicing level to which the filtering operation is applied and a plurality of previously determined voicing levels associated with the original speech signal.
12. The method of Claim 2, wherein said step of assigning to the first and second differences relative degrees of importance comprises the step of determining first and second weighting factors respectively associated with the first and second differences comprising determining a voicing level associated with the original speech signal, and determining the weighting factors as a function of the voicing level.
13. The method of Claim 12, wherein said step of determining the first and second weighting factors comprises making the first weighting factor larger than the second weighting factor in response to a first voicing level, and making the second weighting factor larger than the first weighting factor in response to a second voicing level that is lower than the first voicing level.
14. The method of Claim 1, wherein said step of determining at least one of the plurality of parameters comprises determining a quantized gain value for use in reconstructing the original speech signal according to a Code Excited Linear Prediction speech coding process.
15. A speech encoding apparatus, comprising:
an input for receiving an original speech signal;
an output for providing information indicative of a plurality of parameters from which an approximation of the original speech signal can be reconstructed; and a controller coupled between said input and said output for providing a coded signal in response to the original speech signal, said coded signal representing the original speech signal, said controller further being for determining at least one of said plurality of parameters from a combination of a first and a second difference between the original speech signal and the coded signal, wherein said first difference is a difference between a waveform associated with the original speech signal and a waveform associated with the coded signal, and wherein the second difference is a difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the coded signal.
16. The apparatus of Claim 15, comprising a balance factor determiner for calculating a balance factor indicative of relative degrees of importance of the first and second differences in determining said at least one parameter, said balance factor determiner having an output coupled to said controller for providing said balance factor to said controller for use in determining said at least one parameter.
17. The apparatus of Claim 16, comprising a voicing level determiner coupled to said input for determining a voicing level of the original speech signal, said voicing level determiner having an output coupled to an input of said balance factor determiner for providing the voicing level to the balance factor determiner, said balance factor determiner operable to determine said balance factor in response to said voicing level information.
18. The apparatus of Claim 17, comprising a filter coupled between said output of said voicing level determiner and said input of said balance factor determiner for receiving the voicing level from said voicing level determiner and for providing to the balance factor determiner a filtered voicing level.
19. The apparatus of Claim 18, wherein said filter is a median filter.
20. The apparatus of Claim 16, wherein said controller is responsive to said balance factor for determining first and second weighting factors respectively associated with the first and second differences.
21. The apparatus of Claim 20, wherein said controller is operable to multiply the first and second differences respectively by the first and second weighting factors in determination of said at least one parameter.
22. The apparatus of Claim 21, wherein said controller is operable to set the second weighting factor to zero in response to a speech onset in the original speech signal.
23. The apparatus of Claim 16, wherein said balance factor determiner is operable to calculate the balance factor based on at least one previously calculated balance factor.
24. The apparatus of Claim 23, wherein said balance factor determiner is operable to limit the magnitude of the balance factor responsive to a previously calculated balance factor having a predetermined magnitude.
25. The apparatus of Claim 15, wherein said speech encoding apparatus comprises a Code Excited Linear Prediction speech encoder, and wherein said at least one parameter is a quantized gain value.
26. A transceiver apparatus for use in a communication system, comprising:
an input for receiving a user input stimulus;

an output for providing an output signal to a communication channel for transmission to a receiver via the communication channel; and a speech encoding apparatus having an input coupled to said transceiver input and having an output coupled to said transceiver output, said input of said speech encoding apparatus being for receiving an original speech signal from said transceiver input, said output of said speech encoding apparatus being for providing to said transceiver output information indicative of a plurality of parameters from which an approximation of the original speech signal can be reconstructed at the receiver, said speech encoding apparatus comprising a controller coupled between said input and said output thereof for providing in response to the original speech signal a coded signal representing the original speech signal, said controller further being for determining at least one of said plurality of parameters from a combination of a first and a second difference between the original speech signal and the coded signal, wherein said first difference is a difference between a waveform associated with the original speech signal and a waveform associated with the coded signal, and wherein the second difference is a difference between an energy parameter derived from the original speech signal and a corresponding energy parameter associated with the coded signal.
27. The apparatus of Claim 26, wherein the transceiver apparatus forms a portion of a cellular telephone.
28. A method for producing a plurality of parameters from an original speech signal in a speech encoder, wherein an approximation of the original speech signal can be reconstructed from the plurality of parameters, the plurality of parameters comprising at least a code vector from an adaptive codebook, a code vector from a fixed codebook, a quantized adaptive gain and a quantized fixed gain that are associated with a coded signal representing the original speech signal, the method comprising the steps of:
generating, in response to the original speech signal, the coded signal by selecting the code vector from the adaptive codebook, the code vector from the fixed codebook and the quantized adaptive gain; and determining the quantized fixed gain from a set of quantized fixed gain values by minimizing a criterion defined by:

wherein ca is the code vector from the adaptive codebook, cf is the code vector from the fixed codebook, gaQ is the quantized adaptive gain, gf is a fixed gain factor selected from the set of quantized fixed gain values, gfQ is the quantized fixed gain, said quantized fixed gain being the fixed gain factor that minimizes the criterion DgfQ, gfOPT is an optimal fixed gain factor, Er is the energy of a residual signal, and α is a balance factor, and wherein a waveform matching part of the criterion DgfQ is multiplied by a first weighting factor equal to 1-α and an energy matching part of the criterion DgfQ is multiplied by a second weighting factor equal to α.
29. The method of Claim 28, further comprising the steps of:
detecting a speech onset in the original speech signal; and setting the second weighting factor to zero in response to detection of the speech onset.
30. The method of Claim 28, further comprising the step of calculating the balance factor based on at least one previously calculated balance factor.
31. The method of Claim 30, wherein said step of calculating the balance factor based on a previously calculated balance factor comprises limiting the magnitude of the balance factor in response to a previously calculated balance factor having a predetermined magnitude.
32. The method of Claim 28, further comprising the steps of:
determining a voicing level associated with the original speech signal; and calculating the balance factor as a function of the voicing level.
33. The method of Claim 32, wherein the first weighting factor is made larger than the second weighting factor in response to a first voicing level, and the second weighting factor is made larger than the first weighting factor in response to a second voicing level that is lower than the first voicing level.
34. The method of Claim 28, further comprising the steps of:
determining a voicing level associated with the original speech signal;

applying a filtering operation to the voicing level to produce a filtered voicing level; and calculating the balance factor as a function of the filtered voicing level.
35. The method of Claim 34, wherein said step of applying a filtering operation comprises applying a median filtering operation, comprising determining a median voicing level among a group of voicing levels comprising the voicing level to which the filtering operation is applied and a plurality of previously determined voicing levels associated with the original speech signal.
36. A speech encoding apparatus, comprising:
an input for receiving an original speech signal;
an output for providing information indicative of a plurality of parameters from which an approximation of the original speech signal can be reconstructed, the plurality of parameters comprising at least a code vector from an adaptive codebook, a code vector from a fixed codebook, a quantized adaptive gain and a quantized fixed gain that are associated with a coded signal;
a controller coupled between said input and said output for providing, in response to the original speech signal, the coded signal representing the original speech signal, said controller further being for selecting the code vector from the adaptive codebook, the code vector from the fixed codebook and the quantized adaptive gain and for determining the quantized fixed gain from a set of quantized fixed gain values by minimizing a criterion defined by:

wherein ca is the code vector from the adaptive codebook, cf is the code vector from the fixed codebook, gaQ is the quantized adaptive gain, gf is a fixed gain factor selected from the set of quantized fixed gain values, gfQ is the quantized fixed gain, said quantized fixed gain being the fixed gain factor that minimizes the criterion DgfQ, gfOPT is an optimal fixed gain factor, Er is the energy of a residual signal, and α is a balance factor, and wherein a waveform matching part of the criterion DgfQ is multiplied by a first weighting factor equal to 1-α and an energy matching part of the criterion DgfQ is multiplied by a second weighting factor equal to α; and a balance factor determiner for calculating the balance factor, said balance factor determiner having an output coupled to said controller for providing said balance factor to said controller for use in determining said quantized fixed gain.
37. The apparatus of Claim 36, comprising a voicing level determiner coupled to said input for determining a voicing level of the original speech signal, said voicing level determiner having an output coupled to an input of said balance factor determiner for providing the voicing level to the balance factor determiner, said balance factor determiner operable to determine said balance factor in response to said voicing level information.
38. The apparatus of Claim 37, comprising a filter coupled between said output of said voicing level determiner and said input of said balance factor determiner for receiving the voicing level from said voicing level determiner and for providing to the balance factor determiner a filtered voicing level.
39. The apparatus of Claim 38, wherein said filter is a median filter.
40. The apparatus of Claim 36, wherein said controller is operable to set the second weighting factor to zero in response to a speech onset in the original speech signal.
41. The apparatus of Claim 36, wherein said balance factor determiner is operable to calculate the balance factor based on at least one previously calculated balance factor.
42. The apparatus of Claim 41, wherein said balance factor determiner is operable to limit the magnitude of the balance factor responsive to a previously calculated balance factor having a predetermined magnitude.
CA002342353A 1998-09-01 1999-08-06 An adaptive criterion for speech coding Expired - Lifetime CA2342353C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/144,961 1998-09-01
US09/144,961 US6192335B1 (en) 1998-09-01 1998-09-01 Adaptive combining of multi-mode coding for voiced speech and noise-like signals
PCT/SE1999/001350 WO2000013174A1 (en) 1998-09-01 1999-08-06 An adaptive criterion for speech coding

Publications (2)

Publication Number Publication Date
CA2342353A1 CA2342353A1 (en) 2000-03-09
CA2342353C true CA2342353C (en) 2009-10-20

Family

ID=22510960

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002342353A Expired - Lifetime CA2342353C (en) 1998-09-01 1999-08-06 An adaptive criterion for speech coding

Country Status (15)

Country Link
US (1) US6192335B1 (en)
EP (1) EP1114414B1 (en)
JP (1) JP3483853B2 (en)
KR (1) KR100421648B1 (en)
CN (1) CN1192357C (en)
AR (1) AR027812A1 (en)
AU (1) AU774998B2 (en)
BR (1) BR9913292B1 (en)
CA (1) CA2342353C (en)
DE (1) DE69906330T2 (en)
MY (1) MY123316A (en)
RU (1) RU2223555C2 (en)
TW (1) TW440812B (en)
WO (1) WO2000013174A1 (en)
ZA (1) ZA200101666B (en)


Also Published As

Publication number Publication date
EP1114414B1 (en) 2003-03-26
KR20010073069A (en) 2001-07-31
DE69906330D1 (en) 2003-04-30
TW440812B (en) 2001-06-16
ZA200101666B (en) 2001-09-25
DE69906330T2 (en) 2003-11-27
RU2223555C2 (en) 2004-02-10
JP3483853B2 (en) 2004-01-06
BR9913292B1 (en) 2013-04-09
WO2000013174A1 (en) 2000-03-09
US6192335B1 (en) 2001-02-20
EP1114414A1 (en) 2001-07-11
CN1325529A (en) 2001-12-05
KR100421648B1 (en) 2004-03-11
AU5888799A (en) 2000-03-21
CA2342353A1 (en) 2000-03-09
AU774998B2 (en) 2004-07-15
AR027812A1 (en) 2003-04-16
CN1192357C (en) 2005-03-09
BR9913292A (en) 2001-09-25
MY123316A (en) 2006-05-31
JP2002524760A (en) 2002-08-06


Legal Events

Date Code Title Description
EEER Examination request
MKEX Expiry

Effective date: 20190806