CA2621175A1 - Systems and methods for audio processing - Google Patents

Systems and methods for audio processing

Info

Publication number
CA2621175A1
Authority
CA
Canada
Prior art keywords
signals
listener
sound source
digital
filters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CA002621175A
Other languages
French (fr)
Other versions
CA2621175C (en)
Inventor
Wen Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DTS LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2621175A1 publication Critical patent/CA2621175A1/en
Application granted granted Critical
Publication of CA2621175C publication Critical patent/CA2621175C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 25/407 Circuits for combining signals of a plurality of transducers (deaf-aid sets: arrangements for obtaining a desired directivity characteristic)
    • H04S 1/002 Two-channel systems: non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S 1/005 For headphones
    • H04S 1/007 Two-channel systems in which the audio signals are in digital form
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 5/00 Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation

Abstract

Systems and methods for audio signal processing are disclosed, where a discrete number of simple digital filters (266) are generated for particular portions of an audio frequency range. Studies have shown that certain frequency ranges are particularly important for human ears' location-discriminating capability, while other ranges are generally ignored. Head-Related Transfer Functions (HRTFs) (170) are examples of response functions that characterize how ears perceive sound positioned at different locations. By selecting one or more "location-critical" portions (172, 174) of such response functions, one can construct simple filters (180) that can be used to simulate hearing where location-discriminating capability is substantially maintained.
Because the filters can be simple, they can be implemented in devices (550, 562) having limited computing power and resources to provide location-discrimination responses that form the basis for many desirable audio effects.

Description

SYSTEMS AND METHODS FOR AUDIO PROCESSING

PRIORITY CLAIM
[0001] This application claims the benefit of priority under 35 U.S.C. 119(e) of U.S. Provisional Application Number 60/716,588, filed on September 13, 2005 and titled SYSTEMS AND METHODS FOR AUDIO PROCESSING, the entirety of which is incorporated herein by reference.

BACKGROUND
Field
[0002] The present disclosure generally relates to audio signal processing, and more particularly, to systems and methods for filtering location-critical portions of the audible frequency range to simulate three-dimensional listening effects.

Description of the Related Art
[0003] Sound signals can be processed to provide enhanced listening effects. For example, various processing techniques can make a sound source be perceived as being positioned or moving relative to a listener. Such techniques allow the listener to enjoy a simulated three-dimensional listening experience even when using speakers having limited configuration and performance.
[0004] However, many sound perception enhancing techniques are complicated, and often require substantial computing power and resources. Thus, these techniques are impractical or impossible to use in many electronic devices having limited computing power and resources. Many portable devices, such as cell phones, PDAs, MP3 players, and the like, generally fall into this category.

SUMMARY
[0005] At least some of the foregoing problems can be addressed by various embodiments of systems and methods for audio signal processing as disclosed herein. In one embodiment, a discrete number of simple digital filters can be generated for particular portions of an audio frequency range. Studies have shown that certain frequency ranges are particularly important for human ears' location-discriminating capability, while other ranges are generally ignored. Head-Related Transfer Functions (HRTFs) are examples of response functions that characterize how ears perceive sound positioned at different locations. By selecting one or more "location-critical" portions of such response functions, one can construct simple filters that can be used to simulate hearing where location-discriminating capability is substantially maintained. Because the filters can be simple, they can be implemented in devices having limited computing power and resources to provide location-discrimination responses that form the basis for many desirable audio effects.
[0006] One embodiment of the present disclosure relates to a method for processing digital audio signals. The method includes receiving one or more digital signals, with each of the one or more digital signals having information about the spatial position of a sound source relative to a listener. The method further includes selecting one or more digital filters, with each of the one or more digital filters being formed from a particular range of a hearing response function. The method further includes applying the one or more filters to the one or more digital signals so as to yield corresponding one or more filtered signals, with each of the one or more filtered signals having a simulated effect of the hearing response function applied to the sound source.
[0007] In one embodiment, the hearing response function includes a head-related transfer function (HRTF). In one embodiment, the particular range includes a particular range of frequency within the HRTF. In one embodiment, the particular range of frequency is substantially within or overlaps with a range of frequency that provides a location-discriminating sensitivity to an average human's hearing that is greater than an average sensitivity across the audible frequency range. In one embodiment, the particular range of frequency includes or substantially overlaps with a peak structure in the HRTF. In one embodiment, the peak structure is substantially within or overlaps with a range of frequency between about 2.5 KHz and about 7.5 KHz. In one embodiment, the peak structure is substantially within or overlaps with a range of frequency between about 8.5 KHz and about 18 KHz.
[0008] In one embodiment, the one or more digital signals include left and right digital signals to be output to left and right speakers. In one embodiment, the left and right digital signals are adjusted for interaural time difference (ITD) based on the spatial position of the sound source relative to the listener. In one embodiment, the ITD adjustment includes receiving a mono input signal having information about the spatial position of the sound source. The ITD adjustment further includes determining a time difference value based on the spatial information. The ITD adjustment further includes generating left and right signals by introducing the time difference value to the mono input signal.
[0009] In one embodiment, the time difference value includes a quantity that is proportional to the absolute value of sinθ cosφ, where θ represents an azimuthal angle of the sound source relative to the front of the listener, and φ represents an elevation angle of the sound source relative to a horizontal plane defined by the listener's ears and the front direction. In one embodiment, the quantity is expressed as (Maximum ITD_Samples_per_Sampling_Rate - 1) |sinθ cosφ|.
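The ITD adjustment described in the two paragraphs above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the rounding to whole samples, and the zero-padded delay of the far-ear channel are assumptions made for this sketch.

```python
import math

def itd_samples(theta_deg, phi_deg, max_itd_samples):
    """Interaural time difference in whole samples, proportional to
    |sin(theta) * cos(phi)| as described above.  theta is the azimuth
    relative to the listener's front; phi is the elevation angle."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    return round((max_itd_samples - 1) * abs(math.sin(theta) * math.cos(phi)))

def apply_itd(mono, delay, source_on_left):
    """Generate a (left, right) pair from a mono block by delaying the
    far-ear channel by `delay` samples (zero-padded at the front)."""
    delayed = [0.0] * delay + list(mono[:len(mono) - delay] if delay else mono)
    near = list(mono)
    return (near, delayed) if source_on_left else (delayed, near)
```

For a source directly to the side (theta = 90, phi = 0) the full maximum delay is used; for a source directly in front the delay is zero.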
[0010] In one embodiment, the determination of the time difference value is performed when the spatial position of the sound source changes. In one embodiment, the method further includes performing a crossfade transition of the time difference value between the previous value and the current value. In one embodiment, the crossfade transition includes changing the time difference value for use in the generation of left and right signals from the previous value to the current value during a plurality of processing cycles.
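One simple way to realize the crossfade transition of paragraph [0010] is a linear ramp spread over a fixed number of processing cycles. The linear schedule is an illustrative assumption; the text does not specify the interpolation shape.

```python
def crossfade(prev, curr, cycles):
    """Ramp a parameter (e.g. the ITD value) from its previous value to
    the current one over a plurality of processing cycles, avoiding an
    audible discontinuity from an abrupt change."""
    step = (curr - prev) / cycles
    return [prev + step * (i + 1) for i in range(cycles)]
```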
[0011] In one embodiment, the one or more filtered signals include left and right filtered signals to be output to left and right speakers. In one embodiment, the method further includes adjusting each of the left and right filtered signals for interaural intensity difference (IID) to account for any intensity differences that may exist and are not accounted for by the application of the one or more filters. In one embodiment, the adjustment of the left and right filtered signals for IID includes determining whether the sound source is positioned at left or right relative to the listener. The adjustment further includes assigning as a weaker signal the left or right filtered signal that is on the opposite side from the sound source. The adjustment further includes assigning as a stronger signal the other of the left or right filtered signal. The adjustment further includes adjusting the weaker signal by a first compensation. The adjustment further includes adjusting the stronger signal by a second compensation.
[0012] In one embodiment, the first compensation includes a compensation value that is proportional to cosθ, where θ represents an azimuthal angle of the sound source relative to the front of the listener. In one embodiment, the compensation value is normalized such that if the sound source is substantially directly in the front, the compensation value can be an original filter level difference, and if the sound source is substantially directly on the stronger side, the compensation value is approximately 1 so that no gain adjustment is made to the weaker signal.
[0013] In one embodiment, the second compensation includes a compensation value that is proportional to sinθ, where θ represents an azimuthal angle of the sound source relative to the front of the listener. In one embodiment, the compensation value is normalized such that if the sound source is substantially directly in the front, the compensation value is approximately 1 so that no gain adjustment is made to the stronger signal, and if the sound source is substantially directly on the weaker side, the compensation value is approximately 2, thereby providing an approximately 6 dB gain compensation to approximately match an overall loudness at different values of the azimuthal angle.
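The compensation pair of paragraphs [0012] and [0013] can be sketched as below. The exact normalization, and the use of |cosθ| and |sinθ| as interpolation weights, are assumptions chosen only so that the endpoints match the description: at the front the weaker gain equals the original filter level difference and the stronger gain is 1; fully to one side the weaker gain is 1 and the stronger gain is 2 (about +6 dB).

```python
import math

def iid_gains(theta_deg, filter_level_diff):
    """Return (weaker_gain, stronger_gain) for azimuth theta_deg.
    The weaker-ear gain tracks cos(theta): it equals the original
    filter level difference at the front and 1 at the side.  The
    stronger-ear gain tracks sin(theta): 1 at the front and 2
    (about +6 dB) when the source is fully to one side."""
    c = abs(math.cos(math.radians(theta_deg)))
    s = abs(math.sin(math.radians(theta_deg)))
    weaker = 1.0 + (filter_level_diff - 1.0) * c
    stronger = 1.0 + s
    return weaker, stronger
```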
[0014] In one embodiment, the adjustment of the left and right filtered signals for IID is performed when new one or more digital filters are applied to the left and right filtered signals due to selected movements of the sound source. In one embodiment, the method further includes performing a crossfade transition of the first and second compensation values between the previous values and the current values. In one embodiment, the crossfade transition includes changing the first and second compensation values during a plurality of processing cycles.
[0015] In one embodiment, the one or more digital filters include a plurality of digital filters. In one embodiment, each of the one or more digital signals is split into the same number of signals as the number of the plurality of digital filters, such that the plurality of digital filters are applied in parallel to the plurality of split signals. In one embodiment, each of the one or more filtered signals is obtained by combining the plurality of split signals filtered by the plurality of digital filters. In one embodiment, the combining includes summing of the plurality of split signals.
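The split-filter-sum arrangement of paragraph [0015] amounts to a parallel filter bank. A minimal sketch, with the band filters left as placeholder callables:

```python
def apply_parallel_filters(signal, filters):
    """Split the input across the filter bank, apply each filter to its
    own copy of the signal in parallel, then combine the branches by
    summing, as described above.  Each filter is a callable mapping a
    list of samples to a list of samples."""
    branches = [f(list(signal)) for f in filters]        # one split copy per filter
    return [sum(samples) for samples in zip(*branches)]  # recombine by summing
```

For example, with two placeholder "filters" that merely scale the signal, `apply_parallel_filters([1.0, 2.0], [lambda x: [2 * s for s in x], lambda x: [0.5 * s for s in x]])` yields `[2.5, 5.0]`.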
[0016] In one embodiment, the plurality of digital filters include first and second digital filters. In one embodiment, each of the first and second digital filters includes a filter that yields a response that is substantially maximally flat in a passband portion and rolls off towards substantially zero in a stopband portion of the hearing response function. In one embodiment, each of the first and second digital filters includes a Butterworth filter. In one embodiment, the passband portion for one of the first and second digital filters is defined by a frequency range between about 2.5 KHz and about 7.5 KHz. In one embodiment, the passband portion for one of the first and second digital filters is defined by a frequency range between about 8.5 KHz and about 18 KHz.
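The two passbands named in paragraph [0016] can be exercised with a crude frequency-domain band-pass as a stand-in. Note this brick-wall FFT mask is not a Butterworth design; it is only a simplified illustration of isolating the 2.5-7.5 KHz and 8.5-18 KHz bands (in practice one would use a proper IIR design, e.g. scipy.signal.butter).

```python
import numpy as np

# Passband edges named in the text, in Hz.
BANDS = [(2500.0, 7500.0), (8500.0, 18000.0)]

def fft_bandpass(x, lo, hi, fs):
    """Zero every FFT bin outside [lo, hi] Hz and transform back.
    A brick-wall stand-in for the band-pass filters described above."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))
```

Applied to a mix of a 1 kHz and a 5 kHz tone, the first band keeps the 5 kHz component and removes the 1 kHz component.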
[0017] In one embodiment, the selection of the one or more digital filters is based on a finite number of geometric positions about the listener. In one embodiment, the geometric positions include a plurality of hemi-planes, each hemi-plane defined by an edge along a direction between the ears of the listener and by an elevation angle φ relative to a horizontal plane defined by the ears and the front direction for the listener. In one embodiment, the plurality of hemi-planes are grouped into one or more front hemi-planes and one or more rear hemi-planes. In one embodiment, the front hemi-planes include hemi-planes at the front of the listener and at elevation angles of approximately 0 and +/- 45 degrees, and the rear hemi-planes include hemi-planes at the rear of the listener and at elevation angles of approximately 0 and +/- 45 degrees.
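Quantizing a source direction onto the six hemi-planes of paragraph [0017] (front/rear, each at elevations of roughly 0 and +/- 45 degrees) might look like the following. The nearest-plane snapping rule and the angle convention (azimuth 0 at the front, growing toward the rear) are assumptions made for this sketch.

```python
def nearest_hemiplane(theta_deg, phi_deg):
    """Map azimuth/elevation (degrees) to one of six discrete
    hemi-planes: ('front' or 'rear', elevation in {-45, 0, 45}).
    Each hemi-plane would select a precomputed positional filter set."""
    az = abs(theta_deg) % 360
    side = 'front' if az <= 90 or az >= 270 else 'rear'
    elevation = min((-45, 0, 45), key=lambda e: abs(e - phi_deg))
    return side, elevation
```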
[0018] In one embodiment, the method further includes performing at least one of the following processing steps either before the receiving of the one or more digital signals or after the applying of the one or more filters: sample rate conversion, Doppler adjustment for sound source velocity, distance adjustment to account for the distance of the sound source to the listener, orientation adjustment to account for the orientation of the listener's head relative to the sound source, or reverberation adjustment.
[0019] In one embodiment, the application of the one or more digital filters to the one or more digital signals simulates an effect of motion of the sound source about the listener.
[0020] In one embodiment, the application of the one or more digital filters to the one or more digital signals simulates an effect of placing the sound source at a selected location about the listener. In one embodiment, the method further includes simulating effects of one or more additional sound sources to simulate an effect of a plurality of sound sources at selected locations about the listener. In one embodiment, the one or more digital signals include left and right digital signals to be output to left and right speakers, and the plurality of sound sources include more than two sound sources such that effects of more than two sound sources are simulated with the left and right speakers. In one embodiment, the plurality of sound sources include five sound sources arranged in a manner similar to one of the surround sound arrangements, and wherein the left and right speakers are positioned in a headphone, such that surround sound effects are simulated by the left and right filtered signals provided to the headphone.
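Virtualizing several sources (e.g. five surround channels) over two headphone channels, as in paragraph [0020], reduces to rendering each source independently and summing the stereo results. A minimal sketch, with the per-source rendering treated as already done:

```python
def mix_sources(rendered):
    """Sum independently rendered (left, right) sample-list pairs, one
    pair per virtual sound source, into a single stereo output."""
    left = [sum(ch) for ch in zip(*(l for l, _ in rendered))]
    right = [sum(ch) for ch in zip(*(r for _, r in rendered))]
    return left, right
```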
[0021] Another embodiment of the present disclosure relates to a positional audio engine for processing a digital signal representative of a sound from a sound source. The audio engine includes a filter selection component configured to select one or more digital filters, with each of the one or more digital filters being formed from a particular range of a hearing response function, the selection based on the spatial position of the sound source relative to a listener. The audio engine further includes a filter application component configured to apply the one or more digital filters to one or more digital signals so as to yield corresponding one or more filtered signals, with each of the one or more filtered signals having a simulated effect of the hearing response function applied to the sound from the sound source.
[0022] In one embodiment, the hearing response function includes a head-related transfer function (HRTF). In one embodiment, the particular range includes a particular range of frequency within the HRTF. In one embodiment, the particular range of frequency is substantially within or overlaps with a range of frequency that provides a location-discriminating sensitivity to an average human's hearing that is greater than an average sensitivity across the audible frequency range. In one embodiment, the particular range of frequency includes or substantially overlaps with a peak structure in the HRTF. In one embodiment, the peak structure is substantially within or overlaps with a range of frequency between about 2.5 KHz and about 7.5 KHz. In one embodiment, the peak structure is substantially within or overlaps with a range of frequency between about 8.5 KHz and about 18 KHz.
[0023] In one embodiment, the one or more digital signals include left and right digital signals such that the one or more filtered signals include left and right filtered signals to be output to left and right speakers.
[0024] In one embodiment, the one or more digital filters include a plurality of digital filters. In one embodiment, each of the one or more digital signals is split into the same number of signals as the number of the plurality of digital filters such that the plurality of digital filters are applied in parallel to the plurality of split signals. In one embodiment, each of the one or more filtered signals is obtained by combining the plurality of split signals filtered by the plurality of digital filters. In one embodiment, the combining includes summing of the plurality of split signals.
[0025] In one embodiment, the plurality of digital filters include first and second digital filters. In one embodiment, each of the first and second digital filters includes a filter that yields a response that is substantially maximally flat in a passband portion and rolls off towards substantially zero in a stopband portion of the hearing response function. In one embodiment, each of the first and second digital filters includes a Butterworth filter. In one embodiment, the passband portion for one of the first and second digital filters is defined by a frequency range between about 2.5 KHz and about 7.5 KHz. In one embodiment, the passband portion for one of the first and second digital filters is defined by a frequency range between about 8.5 KHz and about 18 KHz.
[0026] In one embodiment, the selection of the one or more digital filters is based on a finite number of geometric positions about the listener. In one embodiment, the geometric positions include a plurality of hemi-planes, each hemi-plane defined by an edge along a direction between the ears of the listener and by an elevation angle φ relative to a horizontal plane defined by the ears and the front direction for the listener. In one embodiment, the plurality of hemi-planes are grouped into one or more front hemi-planes and one or more rear hemi-planes. In one embodiment, the front hemi-planes include hemi-planes at the front of the listener and at elevation angles of approximately 0 and +/- 45 degrees, and the rear hemi-planes include hemi-planes at the rear of the listener and at elevation angles of approximately 0 and +/- 45 degrees.
[0027] In one embodiment, the application of the one or more digital filters to the one or more digital signals simulates an effect of motion of the sound source about the listener.
[0028] In one embodiment, the application of the one or more digital filters to the one or more digital signals simulates an effect of placing the sound source at a selected location about the listener.
[0029] Yet another embodiment of the present disclosure relates to a system for processing digital audio signals. The system includes an interaural time difference (ITD) component configured to receive a mono input signal and generate left and right ITD-adjusted signals to simulate an arrival time difference of sound arriving at left and right ears of a listener from a sound source. The mono input signal includes information about the spatial position of the sound source relative to the listener. The system further includes a positional filter component configured to receive the left and right ITD-adjusted signals and apply one or more digital filters to each of the left and right ITD-adjusted signals to generate left and right filtered digital signals, with each of the one or more digital filters being based on a particular range of a hearing response function, such that the left and right filtered digital signals simulate the hearing response function. The system further includes an interaural intensity difference (IID) component configured to receive the left and right filtered digital signals and generate left and right IID-adjusted signals to simulate an intensity difference of the sound arriving at the left and right ears.
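The three-stage chain of paragraph [0029] (ITD, then positional filtering, then IID) composes naturally as a pipeline. A skeletal sketch with the stage internals left as placeholder callables:

```python
def render_source(mono, itd_stage, filter_stage, iid_stage):
    """Run one mono block through the ITD -> positional filter -> IID
    chain described above.  itd_stage maps the mono block to a
    (left, right) pair; the other stages map (left, right) to
    (left, right).  The stage implementations are placeholders."""
    left, right = itd_stage(mono)
    left, right = filter_stage(left, right)
    return iid_stage(left, right)
```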
[0030] In one embodiment, the hearing response function includes a head-related transfer function (HRTF). In one embodiment, the particular range includes a particular range of frequency within the HRTF. In one embodiment, the particular range of frequency is substantially within or overlaps with a range of frequency that provides a location-discriminating sensitivity to an average human's hearing that is greater than an average sensitivity across the audible frequency range. In one embodiment, the particular range of frequency includes or substantially overlaps with a peak structure in the HRTF. In one embodiment, the peak structure is substantially within or overlaps with a range of frequency between about 2.5 KHz and about 7.5 KHz. In one embodiment, the peak structure is substantially within or overlaps with a range of frequency between about 8.5 KHz and about 18 KHz.
[0031] In one embodiment, the ITD includes a quantity that is proportional to the absolute value of sinθ cosφ, where θ represents an azimuthal angle of the sound source relative to the front of the listener, and φ represents an elevation angle of the sound source relative to a horizontal plane defined by the listener's ears and the front direction.
[0032] In one embodiment, the ITD determination is performed when the spatial position of the sound source changes. In one embodiment, the ITD component is further configured to perform a crossfade transition of the ITD between the previous value and the current value. In one embodiment, the crossfade transition includes changing the ITD from the previous value to the current value during a plurality of processing cycles.
[0033] In one embodiment, the IID component is configured to determine whether the sound source is positioned at left or right relative to the listener. The IID component is further configured to assign as a weaker signal the left or right filtered signal that is on the opposite side from the sound source. The IID component is further configured to assign as a stronger signal the other of the left or right filtered signal. The IID component is further configured to adjust the weaker signal by a first compensation.

The IID component is further configured to adjust the stronger signal by a second compensation.
[0034] In one embodiment, the first compensation includes a compensation value that is proportional to cosθ, where θ represents an azimuthal angle of the sound source relative to the front of the listener. In one embodiment, the second compensation includes a compensation value that is proportional to sinθ, where θ represents an azimuthal angle of the sound source relative to the front of the listener.
[0035] In one embodiment, the adjustment of the left and right filtered signals for IID is performed when new one or more digital filters are applied to the left and right filtered signals due to selected movements of the sound source. In one embodiment, the IID component is further configured to perform a crossfade transition of the first and second compensation values between the previous values and the current values. In one embodiment, the crossfade transition includes changing the first and second compensation values during a plurality of processing cycles.
[0036] In one embodiment, the one or more digital filters include a plurality of digital filters. In one embodiment, each of the one or more digital signals is split into the same number of signals as the number of the plurality of digital filters such that the plurality of digital filters are applied in parallel to the plurality of split signals. In one embodiment, each of the left and right filtered digital signals is obtained by combining the plurality of split signals filtered by the plurality of digital filters. In one embodiment, the combining includes summing of the plurality of split signals.
[0037] In one embodiment, the plurality of digital filters include first and second digital filters. In one embodiment, each of the first and second digital filters includes a filter that yields a response that is substantially maximally flat in a passband portion and rolls off towards substantially zero in a stopband portion of the hearing response function. In one embodiment, each of the first and second digital filters includes a Butterworth filter. In one embodiment, the passband portion for one of the first and second digital filters is defined by a frequency range between about 2.5 KHz and about 7.5 KHz. In one embodiment, the passband portion for one of the first and second digital filters is defined by a frequency range between about 8.5 KHz and about 18 KHz.
[0038] In one embodiment, the positional filter component is further configured to select the one or more digital filters based on a finite number of geometric positions about the listener. In one embodiment, the geometric positions include a plurality of hemi-planes, each hemi-plane defined by an edge along a direction between the ears of the listener and by an elevation angle φ relative to a horizontal plane defined by the ears and the front direction for the listener. In one embodiment, the plurality of hemi-planes are grouped into one or more front hemi-planes and one or more rear hemi-planes. In one embodiment, the front hemi-planes include hemi-planes at the front of the listener and at elevation angles of approximately 0 and +/- 45 degrees, and the rear hemi-planes include hemi-planes at the rear of the listener and at elevation angles of approximately 0 and +/- 45 degrees.
[0039] In one embodiment, the system further includes at least one of the following: a sample rate conversion component, a Doppler adjustment component configured to simulate sound source velocity, a distance adjustment component configured to account for the distance of the sound source to the listener, an orientation adjustment component configured to account for the orientation of the listener's head relative to the sound source, or a reverberation adjustment component to simulate a reverberation effect.
[0040] Yet another embodiment of the present disclosure relates to a system for processing digital audio signals. The system includes a plurality of signal processing chains, with each chain including an interaural time difference (ITD) component configured to receive a mono input signal and generate left and right ITD-adjusted signals to simulate an arrival time difference of sound arriving at left and right ears of a listener from a sound source. The mono input signal includes information about the spatial position of the sound source relative to the listener. Each chain further includes a positional filter component configured to receive the left and right ITD-adjusted signals and apply one or more digital filters to each of the left and right ITD-adjusted signals to generate left and right filtered digital signals, with each of the one or more digital filters being based on a particular range of a hearing response function, such that the left and right filtered digital signals simulate the hearing response function. Each chain further includes an interaural intensity difference (IID) component configured to receive the left and right filtered digital signals and generate left and right IID-adjusted signals to simulate an intensity difference of the sound arriving at the left and right ears.
[0041] Yet another embodiment of the present disclosure relates to an apparatus having a means for receiving one or more digital signals. The apparatus further includes a means for selecting one or more digital filters based on information about the spatial position of a sound source. The apparatus further includes a means for applying the one or more filters to the one or more digital signals so as to yield corresponding one or more filtered signals that simulate an effect of a hearing response function.
[0042] Yet another embodiment of the present disclosure relates to an apparatus having a means for forming one or more electronic filters, and a means for applying the one or more electronic filters to a sound signal so as to simulate a three-dimensional sound effect.

BRIEF DESCRIPTION OF THE DRAWINGS
[0043] Figure 1 shows an example listening situation where a positional audio engine can provide the sound effect of moving sound source(s) to a listener;
[0044] Figure 2 shows another example listening situation where the positional audio engine can provide a surround sound effect to a listener using a headphone;
[0045] Figure 3 shows a block diagram of an overall functionality of the positional audio engine;
[0046] Figure 4 shows one embodiment of a process that can be performed by the positional audio engine of Figure 3;
[0047] Figure 5 shows one embodiment of a process that can be a more specific example of the process of Figure 4;
[0048] Figure 6 shows one embodiment of a process that can be a more specific example of the process of Figure 5;
[0049] Figure 7A shows, by way of example, how one or more pieces of location-critical information from response curves can be converted to relatively simple filter responses;
[0050] Figure 7B shows one embodiment of a process that can provide the example conversion of Figure 7A;
[0051] Figure 8 shows an example spatial geometry definition for the purpose of description;
[0052] Figure 9 shows an example spatial configuration where space about a listener can be divided into four quadrants;
[0053] Figure 10 shows an example spatial configuration where sound sources in the spatial configuration of Figure 9 can be approximated as being positioned on a plurality of discrete hemi-planes about the X-axis, thereby simplifying the positional filtering process;
[0054] Figures 11A - 11C show example response curves such as HRTFs that can be obtained at various example locations on some of the hemi-planes of Figure 10, such that position-critical simulated filter responses can be obtained for various hemi-planes;
[0055] Figure 12 shows that in one embodiment, positional filters can provide position-critical simulated filter responses, and can operate with interaural time difference (ITD) and interaural intensity difference (IID) functionalities;
[0056] Figure 13 shows one embodiment of the ITD component of Figure 12;
[0057] Figure 14 shows one embodiment of the positional filters component of Figure 12;
[0058] Figure 15 shows one embodiment of the IID component of Figure 12;
[0059] Figure 16 shows one embodiment of a process that can be performed by the ITD component of Figure 12;
[0060] Figure 17 shows one embodiment of a process that can be performed by the positional filters and IID components of Figure 12;
[0061] Figure 18 shows one embodiment of a process that can be performed to provide the functionalities of the ITD, positional filters, and IID components of Figure 12, where crossfading functionalities can provide smooth transition of the effects of sound sources that move;
[0062] Figure 19 shows an example signal processing configuration where the positional filters component can be part of a chain with other sound processing components;
[0063] Figure 20 shows that in one embodiment, a plurality of signal processing chains can be implemented to simulate a plurality of sound sources;
[0064] Figure 21 shows another variation to the embodiment of Figure 20;
[0065] Figures 22A and 22B show non-limiting examples of audio systems where the positional audio engine having positional filters can be implemented; and
[0066] Figures 23A and 23B show non-limiting examples of devices where the functionalities of the positional filters can be implemented to provide an enhanced listening experience to a listener.
[0067] These and other aspects, advantages, and novel features of the present teachings will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. In the drawings, similar elements have similar reference numerals.

DETAILED DESCRIPTION OF SOME EMBODIMENTS
[0068] The present disclosure generally relates to audio signal processing technology. In some embodiments, various features and techniques of the present disclosure can be implemented on audio or audio/visual devices. As described herein, various features of the present disclosure allow efficient processing of sound signals, so that in some applications, realistic positional sound imaging can be achieved even with limited signal processing resources. As such, in some embodiments, sound having realistic impact on the listener can be output by portable devices such as handheld devices where computing power may be limited. It will be understood that various features and concepts disclosed herein are not limited to implementations in portable devices, but can be implemented in any electronic devices that process sound signals.
[0069] Figure 1 shows an example situation 100 where a listener 102 is shown to listen to sound 110 from speakers 108. The listener 102 is depicted as perceiving one or more sound sources 112 as being at certain locations relative to the listener 102. The example sound source 112a "appears" to be in front and to the right of the listener 102; and the example sound source 112b appears to be at the rear and to the left of the listener. The sound source 112a is also depicted as moving (indicated by arrow 114) relative to the listener 102.
[0070] As also shown in Figure 1, some sounds can make it appear that the listener 102 is moving with respect to some sound source. Many other combinations of sound-source and listener orientation and motion can be effectuated. In some embodiments, such audio perception combined with corresponding visual perception (from a screen, for example) can provide an effective and powerful sensory effect to the listener.
[0071] In one embodiment, a positional audio engine 104 can generate and provide a signal 106 to the speakers 108 to achieve such a listening effect. Various embodiments and features of the positional audio engine 104 are described below in greater detail.
[0072] Figure 2 shows another example situation 120 where the listener 102 is listening to sound from a two-speaker device such as a headphone 124. Again, the positional audio engine 104 is depicted as generating and providing a signal 122 to the example headphone. In this example implementation, sounds perceived by the listener 102 make it appear that there are multiple sound sources at substantially fixed locations relative to the listener 102. For example, a surround sound effect can be created by making sound sources 126 (five in this example, but other numbers and configurations are possible also) appear to be positioned at certain locations.
[0073] In some embodiments, such audio perception combined with corresponding visual perception (from a screen, for example) can provide an effective and powerful sensory effect to the listener. Thus, for example, a surround-sound effect can be created for a listener listening to a handheld device through a headphone. Various embodiments and features of the positional audio engine 104 are described below in greater detail.
[0074] Figure 3 shows a block diagram of a positional audio engine 130 that receives an input signal 132 and generates an output signal 134. Such signal processing with features as described herein can be implemented in numerous ways. In a non-limiting example, some or all of the functionalities of the positional audio engine 130 can be implemented as an application programming interface (API) between an operating system and a multimedia application in an electronic device. In another non-limiting example, some or all of the functionalities of the engine 130 can be incorporated into the source data (for example, in the data file or streaming data).
[0075] Other configurations are possible. For example, various concepts and features of the present disclosure can be implemented for processing of signals in analog systems. In such systems, analog equivalents of positional filters can be configured based on location-critical information in a manner similar to the various techniques described herein. Thus, it will be understood that various concepts and features of the present disclosure are not limited to digital systems.
[0076] Figure 4 shows one embodiment of a process 140 that can be performed by the positional audio engine 130. In a process block 142, selected positional response information is obtained within a given frequency range. In one embodiment, the given range can be an audible frequency range (for example, from about 20 Hz to about 20 kHz). In a process block 144, an audio signal is processed based on the selected positional response information.
[0077] Figure 5 shows one embodiment of a process 150 where the selected positional response information of the process 140 (Figure 4) can be location-critical or location-relevant information. In a process block 152, location-critical information is obtained from frequency response data. In a process block 154, locations of one or more sound sources are determined based on the location-critical information.
[0078] Figure 6 shows one embodiment of a process 160 where a more specific implementation of the process 150 (Figure 5) can be performed. In a process block 162, a discrete set of filter parameters is obtained, where the filter parameters can simulate one or more location-critical portions of one or more HRTFs (Head-Related Transfer Functions). In one embodiment, the filter parameters can be filter coefficients for digital signal filtering. In a process block 164, locations of one or more sound sources are determined based on filtering using the filter parameters.
[0079] For the purpose of description, "location-critical" means a portion of a human hearing response spectrum (for example, a frequency response spectrum) where sound source location discrimination is found to be particularly acute. HRTF is an example of a human hearing response spectrum. Studies (for example, "A comparison of spectral correlation and local feature-matching models of pinna cue processing" by E. A. Macpherson, Journal of the Acoustical Society of America, 101, 3105, 1997) have shown that human listeners generally do not process entire HRTF information to distinguish where sound is coming from. Instead, they appear to focus on certain features in HRTFs. For example, local feature matches and gradient correlations in frequencies over 4 kHz appear to be particularly important for sound direction discrimination, while other portions of HRTFs are generally ignored.
[0080] Figure 7A shows example HRTFs 170 corresponding to the left and right ears' hearing responses to an example sound source positioned in front at about 45 degrees to the right (at about the ear level). In one embodiment, two peak structures indicated by arrows 172 and 174, and related structures (such as the valley between the peaks 172 and 174), can be considered to be location-critical for the left-ear hearing of the example sound source orientation. Similarly, two peak structures indicated by arrows 176 and 178, and related structures (such as the valley between the peaks 176 and 178), can be considered to be location-critical for the right-ear hearing of the example sound source orientation.
[0081] Figure 7B shows one embodiment of a process 190 that, in a process block 192, can identify one or more location-critical frequencies (or frequency ranges) from response data such as the example HRTFs 170 of Figure 7A. In the example HRTFs 170, two example frequencies are indicated by the arrows 172, 174, 176, and 178. In a process block 194, filter coefficients that simulate the one or more such location-critical frequency responses can be obtained. As described herein, and as shown in a process block 196, such filter coefficients can be used subsequently to simulate the response of the example sound source orientation that generated the HRTFs 170.
[0082] Simulated filter responses 180 corresponding to the HRTFs 170 can result from the filter coefficients determined in the process block 194. As shown, peaks 186, 188, 182, and 184 (and the corresponding valleys) are replicated so as to provide location-critical responses for location discrimination of the sound source. Other portions of the HRTFs 170 are shown to be generally ignored, and are thereby represented as substantially flat responses at lower frequencies.
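As a concrete illustration of process block 192, identifying location-critical frequencies from tabulated response data can be sketched as a simple peak finder. This is a hypothetical sketch, not the patent's method: the function name, the prominence threshold, and the input format (parallel frequency/magnitude lists) are all assumptions.

```python
def find_location_critical_peaks(freqs_hz, mags_db, min_prominence_db=2.0):
    """Return (frequency, level) pairs for local maxima in a sampled
    magnitude response that stand out by at least min_prominence_db
    over the surrounding levels - a crude stand-in for identifying
    the location-critical peaks of an HRTF."""
    peaks = []
    for i in range(1, len(mags_db) - 1):
        if mags_db[i] > mags_db[i - 1] and mags_db[i] >= mags_db[i + 1]:
            # Prominence: height above the higher of the two side minima.
            left_min = min(mags_db[:i])
            right_min = min(mags_db[i:])
            if mags_db[i] - max(left_min, right_min) >= min_prominence_db:
                peaks.append((freqs_hz[i], mags_db[i]))
    return peaks
```

The frequencies returned would then feed filter design (process block 194), as in Table 1 below.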
[0083] Because only certain portion(s) and/or structure(s) are selected (in this example, the two peaks and the related valley), formation of filter responses (for example, determination of the filter coefficients that yields the example simulated responses 180) can be simplified greatly. Moreover, such filter coefficients can be stored and used subsequently in a greatly simplified manner, thereby substantially reducing the computing power required to effectuate realistic location-discriminating sound output to a listener. Specific examples of filter coefficient determination and subsequent use are described below in greater detail.
[0084] In the description herein, filter coefficient determination and subsequent use are described in the context of the example two-peak selection. It will be understood, however, that in some embodiments, other portion(s) and/or feature(s) of HRTFs can be identified and simulated. So, for example, if a given HRTF has three peaks that can be location-critical, those three peaks can be identified and simulated. Accordingly, three filters can represent those three peaks instead of two filters for the two peaks.
[0085] In one embodiment, the selected features and/or ranges of the HRTFs (or other frequency response curves) can be simulated by obtaining filter coefficients that generate an approximated response of the desired features and/or ranges. Such filter coefficients can be obtained using any number of known techniques.
[0086] In one embodiment, the simplification provided by the selected features (for example, peaks) allows use of simplified filtering techniques. In one embodiment, fast and simple filtering, such as infinite impulse response (IIR) filtering, can be utilized to simulate the response of a limited number of selected location-critical features.
[0087] By way of example, the two example peaks (172 and 174 for the left hearing, and 176 and 178 for the right hearing) of the example HRTFs 170 can be simulated using a known Butterworth filtering technique. Coefficients for such known filters can be obtained using any known techniques, including, for example, signal processing applications such as MATLAB. Table 1 shows examples of MATLAB function calls, of the form butter(Order, Normalized range, Filter type), that can return simulated responses of the example HRTFs 170.

Table 1:
Peak              Gain   MATLAB filter function call butter(Order, Normalized range, Filter type)
Peak 172 (Left)   2 dB   Order = 1; Range = [2700/(SamplingRate/2), 6000/(SamplingRate/2)]; Filter type = 'bandpass'
Peak 174 (Left)   2 dB   Order = 1; Range = [11000/(SamplingRate/2), 14000/(SamplingRate/2)]; Filter type = 'bandpass'
Peak 176 (Right)  3 dB   Order = 1; Range = [2600/(SamplingRate/2), 6000/(SamplingRate/2)]; Filter type = 'bandpass'
Peak 178 (Right)  11 dB  Order = 1; Range = [12000/(SamplingRate/2), 16000/(SamplingRate/2)]; Filter type = 'bandpass'

[0088] In one embodiment, the foregoing example IIR filter responses to the selected peaks of the example HRTFs 170 can yield the simulated responses 180. The corresponding filter coefficients can be stored for subsequent use, as indicated in the process block 196 of the process 190.
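The design-and-apply cycle of paragraphs [0086]-[0088] can be sketched in Python. Since MATLAB's butter() is not used here, this sketch substitutes the common audio-EQ ("cookbook") biquad bandpass design for the first-order Butterworth bandpass named in Table 1 - a stand-in, not the patent's exact coefficients - together with a direct-form IIR routine for applying stored coefficients. The 48 kHz sampling rate is illustrative.

```python
import math

def bandpass_biquad(f_low, f_high, fs):
    """Biquad bandpass covering [f_low, f_high] Hz; returns (b, a).

    Audio-EQ cookbook design, used here as a stand-in for the
    first-order Butterworth bandpass of Table 1."""
    f0 = math.sqrt(f_low * f_high)     # geometric center frequency
    q = f0 / (f_high - f_low)          # Q set by the passband width
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def iir_filter(b, a, samples):
    """Direct-form I biquad: feed-forward taps b, feedback taps a[1:]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Coefficients for the left-ear peak 172 band of Table 1,
# at an assumed 48 kHz sampling rate.
b172, a172 = bandpass_biquad(2700.0, 6000.0, 48000.0)
```

The dB gains of Table 1 would then be applied as scale factors on each band's output before summation into the simulated response.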
[0089] As previously stated, the example HRTFs 170 and simulated responses 180 correspond to a sound source located in front, at about 45 degrees to the right (at about the ear level). Response(s) to other source location(s) can be obtained in a similar manner to provide a two- or three-dimensional response coverage about the listener. Specific filtering examples for other sound source locations are described below in greater detail.
[0090] Figure 8 shows an example spatial coordinate definition 200 for the purpose of description herein. The listener 102 is assumed to be positioned at the origin. The Y-axis is considered to be the front to which the listener 102 faces. Thus, the X-Y plane represents the horizontal plane with respect to the listener 102. A sound source 202 is shown to be located at a distance "R" from the origin. The angle φ represents the elevation angle from the horizontal plane, and the angle θ represents the azimuthal angle from the Y-axis. Thus, for example, a sound source located directly behind the listener's head would have θ = 180 degrees and φ = 0 degrees.
[0091] In one embodiment, as shown in Figure 9, space about the listener (at the origin) can be divided into front and rear, as well as left and right. In one embodiment, a front hemi-plane 210 and a rear hemi-plane 212 can be defined such that together they define a plane that has an elevation angle φ and intersects the X-Y plane at the X-axis. Thus, for example, the example sound source at θ = 45° and φ = 0°, corresponding to the example HRTFs 170 of Figure 7A, is in the Front-Right (FR) section and in the front hemi-plane at φ = 0°.
[0092] In one embodiment, as described below in greater detail, various hemi-planes can be above and/or below the horizontal to account for sound sources above and/or below the ear level. For a given hemi-plane, a response obtained for one side (e.g., the right side) can be used to estimate the response at the mirror-image location (about the Y-Z plane) on the other side (e.g., the left side) by way of symmetry of the listener's head. In one embodiment, because such symmetry does not exist for front and rear, separate responses can be obtained for the front and rear (and thus for the front and rear hemi-planes).
[0093] Figure 10 shows that in one embodiment, the space around the listener (at the origin) can be divided into a plurality of front and rear hemi-planes. In one embodiment, a front hemi-plane 362 can be at a horizontal orientation (φ = 0°), and the corresponding rear hemi-plane 364 would also be substantially horizontal. A front hemi-plane 366 can be at a front-elevated orientation of about 45 degrees (φ = +45°), and the corresponding rear hemi-plane 368 would be at about 45 degrees below the rear hemi-plane 364. A front hemi-plane 370 can be at an orientation of about -45 degrees (φ = -45°), and the corresponding rear hemi-plane 372 would be at about 45 degrees above the rear hemi-plane 364.
[0094] In one embodiment, sound sources about the listener can be approximated as being on one of the foregoing hemi-planes. Each hemi-plane can have a set of filter coefficients that simulate the response of sound sources on that hemi-plane. Thus, the example simulated response described above in reference to Figure 7A can provide a set of filter coefficients for the front horizontal hemi-plane 362. Simulated responses to sound sources located anywhere on the front horizontal hemi-plane 362 can be approximated by adjusting the relative gains of the left and right responses to account for left and right displacements from the front direction (Y-axis). Moreover, other parameters such as sound source distance and/or velocity can also be approximated in a manner described below.
[0095] Figures 11A - 11C show some examples of simulated responses to various corresponding HRTFs (not shown) that can be obtained in a manner similar to that described above. Figure 11A shows an example simulated response 380 obtained from location-critical portions of HRTFs corresponding to θ = 270° and φ = +45° (directly left for the front elevated hemi-plane 366). Figure 11B shows an example simulated response 382 obtained from location-critical portions of HRTFs corresponding to θ = 270° and φ = 0° (directly left for the horizontal hemi-plane 362). Figure 11C shows an example simulated response 384 obtained from location-critical portions of HRTFs corresponding to θ = 270° and φ = -45° (directly left for the front lowered hemi-plane 370). Similar simulated responses can be obtained for the rear hemi-planes 372, 364, and 368. Moreover, such simulated responses can be obtained at various values of θ.
[0096] Note that in the example simulated response 384, bandstop Butterworth filtering can be used to obtain a desired approximation of the identified features. Thus, it should be understood that various types of filtering techniques can be used to obtain desired results. Moreover, filters other than Butterworth filters can be used to achieve similar results. Furthermore, although IIR filters are used to provide fast and simple filtering, at least some of the techniques of the present disclosure can also be implemented using other filters (such as finite impulse response (FIR) filters).
[0097] For the foregoing example hemi-plane configuration (φ = +45°, 0°, -45°), Table 2 lists filtering parameters that can be input to obtain filter coefficients for the six hemi-planes (366, 362, 370, 372, 364, and 368). For the example parameters in Table 2 (as in Table 1), the example Butterworth filter function call can be made in MATLAB as:

    butter(Order, [fLow/(SamplingRate/2), fHigh/(SamplingRate/2)], Type)

where Order represents the highest order of filter terms, fLow and fHigh represent the boundary values of the selected frequency range, SamplingRate represents the sampling rate, and Type represents the filter type, for each given filter. Other values and/or types for filter parameters are also possible.

Table 2:
Hemi-plane       Filter    Gain (dB)  Order  Frequency Range (fLow, fHigh) (kHz)  Type
Front, φ = +0°   Left #1     2         1      2.7, 6.0                            bandpass
Front, φ = +0°   Left #2     2         1      11, 14                              bandpass
Front, φ = +0°   Right #1    3         1      2.6, 6.0                            bandpass
Front, φ = +0°   Right #2    11        1      12, 16                              bandpass
Front, φ = +45°  Left #1    -4         1      2.5, 6.0                            bandpass
Front, φ = +45°  Left #2    -1         1      13, 18                              bandpass
Front, φ = +45°  Right #1    9         1      2.5, 7.5                            bandpass
Front, φ = +45°  Right #2    6         1      11, 16                              bandpass
Front, φ = -45°  Left #1    -15        1      5.0, 7.0                            bandstop
Front, φ = -45°  Left #2    -11        1      10, 13                              bandstop
Front, φ = -45°  Right #1   -3         1      5.0, 7.0                            bandstop
Front, φ = -45°  Right #2    3         1      10, 13                              bandstop
Rear, φ = +0°    Left #1     6         1      3.5, 5.2                            bandpass
Rear, φ = +0°    Left #2     1         1      9.5, 12                             bandpass
Rear, φ = +0°    Right #1    13        1      3.3, 5.1                            bandpass
Rear, φ = +0°    Right #2    6         1      10, 14                              bandpass
Rear, φ = +45°   Left #1     6         1      2.5, 7.0                            bandpass
Rear, φ = +45°   Left #2     1         1      11, 16                              bandpass
Rear, φ = +45°   Right #1    13        1      2.5, 7.0                            bandpass
Rear, φ = +45°   Right #2    6         1      12, 15                              bandpass
Rear, φ = -45°   Left #1     6         1      5.0, 7.0                            bandstop
Rear, φ = -45°   Left #2     1         1      10, 12                              bandstop
Rear, φ = -45°   Right #1    13        1      5.0, 7.0                            bandstop
Rear, φ = -45°   Right #2    6         1      8.5, 11                             bandstop

[0098] In one embodiment, as seen in Table 2, each hemi-plane can have four sets of filter coefficients: two filters for the two example location-critical peaks, for each of left and right. Thus, with six hemi-planes, there can be 24 filters.
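For implementation, the parameters of Table 2 can be tabulated directly. The following Python literal transcribes Table 2; the dictionary keys and tuple field order are assumptions of this sketch.

```python
# (channel, gain_dB, order, f_low_kHz, f_high_kHz, type) per Table 2
FILTER_BANK = {
    ("front", 0): [
        ("left", 2, 1, 2.7, 6.0, "bandpass"),
        ("left", 2, 1, 11.0, 14.0, "bandpass"),
        ("right", 3, 1, 2.6, 6.0, "bandpass"),
        ("right", 11, 1, 12.0, 16.0, "bandpass"),
    ],
    ("front", 45): [
        ("left", -4, 1, 2.5, 6.0, "bandpass"),
        ("left", -1, 1, 13.0, 18.0, "bandpass"),
        ("right", 9, 1, 2.5, 7.5, "bandpass"),
        ("right", 6, 1, 11.0, 16.0, "bandpass"),
    ],
    ("front", -45): [
        ("left", -15, 1, 5.0, 7.0, "bandstop"),
        ("left", -11, 1, 10.0, 13.0, "bandstop"),
        ("right", -3, 1, 5.0, 7.0, "bandstop"),
        ("right", 3, 1, 10.0, 13.0, "bandstop"),
    ],
    ("rear", 0): [
        ("left", 6, 1, 3.5, 5.2, "bandpass"),
        ("left", 1, 1, 9.5, 12.0, "bandpass"),
        ("right", 13, 1, 3.3, 5.1, "bandpass"),
        ("right", 6, 1, 10.0, 14.0, "bandpass"),
    ],
    ("rear", 45): [
        ("left", 6, 1, 2.5, 7.0, "bandpass"),
        ("left", 1, 1, 11.0, 16.0, "bandpass"),
        ("right", 13, 1, 2.5, 7.0, "bandpass"),
        ("right", 6, 1, 12.0, 15.0, "bandpass"),
    ],
    ("rear", -45): [
        ("left", 6, 1, 5.0, 7.0, "bandstop"),
        ("left", 1, 1, 10.0, 12.0, "bandstop"),
        ("right", 13, 1, 5.0, 7.0, "bandstop"),
        ("right", 6, 1, 8.5, 11.0, "bandstop"),
    ],
}
```

As paragraph [0098] notes, this yields four filters per hemi-plane, 24 in total.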
[0099] In one embodiment, the same filter coefficients can be used to simulate responses to sound from sources anywhere on a given hemi-plane. As described below in greater detail, effects due to left-right displacement, distance, and/or velocity of the source can be accounted for and adjusted. If a source moves from one hemi-plane to another hemi-plane, a transition of filter coefficients can be implemented, in a manner described below, so as to provide a smooth transition in the perceived sound.
[0100] In one embodiment, if a given sound source is located somewhere between two hemi-planes (for example, the source is at front, φ = +30°), then the source can be considered to be at the "nearest" plane (for example, the nearest hemi-plane would be the front, φ = +45°). As one can see, it may be desirable in certain situations to provide more or fewer hemi-planes in space about the listener, so as to provide less or more "granularity" in the distribution of hemi-planes.
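The "nearest hemi-plane" rule above amounts to quantizing the source direction (θ, φ) to one of the six tabulated hemi-planes. The sketch below is one plausible reading; the front/rear boundary handling at θ = 90° and 270° is an assumption.

```python
def nearest_hemi_plane(theta_deg, phi_deg, elevations=(-45, 0, 45)):
    """Quantize a source direction to one of the six hemi-planes.

    Front vs. rear follows the azimuth theta (measured from the front
    Y-axis); elevation phi snaps to the nearest tabulated hemi-plane
    angle. A sketch only - boundary handling is an assumption."""
    theta = theta_deg % 360
    region = "front" if (theta <= 90 or theta >= 270) else "rear"
    elevation = min(elevations, key=lambda e: abs(e - phi_deg))
    return (region, elevation)
```

With the Table 2 filter bank keyed by (region, elevation), this function's return value selects the four filters for a given source position.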
[0101] Moreover, the three-dimensional space does not necessarily need to be divided into hemi-planes about the X-axis. The space could be divided into any one-, two-, or three-dimensional geometries relative to a listener. In one embodiment, as done in the hemi-planes about the X-axis, symmetries such as left and right hearing can be utilized to reduce the number of sets of filter coefficients.
[0102] It will be understood that the six-hemi-plane configuration (φ = +45°, 0°, -45°) described above is an example of how selected location-critical response information can be provided for a limited number of orientations relative to a listener. By doing so, substantially realistic three-dimensional sound effects can be reproduced using relatively little computing power and/or resources. Even if the number of hemi-planes is increased for finer granularity - say, to ten (front and rear at φ = +60°, +30°, 0°, -30°, -60°) - the number of sets of filter coefficients can be maintained at a manageable level.
[0103] Figure 12 shows one embodiment of a functional block diagram 220 where positional filtering 226 can provide the functionalities of the positional audio engine by simulation of the location-critical information as described above. In one embodiment, a mono input signal 222 having information about the location of a sound source can be input to a component 224 that determines an interaural time delay (or difference) ("ITD"). The ITD can provide information about the difference in arrival times at the two ears based on the source's location information. An example of ITD functionality is described below in greater detail.
[0104] In one embodiment, the ITD component 224 can output left and right signals that take into account the arrival difference, and such output signals can be provided to the positional-filters component 226. An example operation of the positional-filters component 226 is described below in greater detail.
[0105] In one embodiment, the positional-filters component 226 can output left and right signals that have been adjusted for the location-critical responses. Such output signals can be provided to a component 228 that determines an interaural intensity difference ("IID"). The IID can provide adjustments of the positional-filters outputs to account for position dependence in the intensities of the left and right signals. An example of IID compensation is described below in greater detail. Left and right signals 230 can be output by the IID component 228 to speakers to provide the positional effect of the sound source.
[0106] Figure 13 shows a block diagram of one embodiment of an ITD component 240 that can be implemented as the ITD component 224 of Figure 12. As shown, an input signal 242 can include information about the location of a sound source at a given sampling time. Such location can include the values of θ and φ of the sound source.
[0107] The input signal 242 is shown to be provided to an ITD calculation component 244 that calculates the interaural time delay needed to simulate different arrival times (if the source is located to one side) at the left and right ears. In one embodiment, the ITD can be calculated as

    ITD = |(Maximum_ITD_Samples_per_Sampling_Rate - 1) sin θ cos φ|.    (1)

Thus, as expected, ITD = 0 when a source is either directly in front (θ = 0°) or directly at the rear (θ = 180°); and ITD has a maximum value (for a given value of φ) when the source is either directly to the left (θ = 270°) or to the right (θ = 90°). Similarly, ITD has a maximum value (for a given value of θ) when the source is at the horizontal plane (φ = 0°), and zero when the source is either at the top (φ = 90°) or bottom (φ = -90°) locations.
For exainple, if the source location is on the right side, the right signal ean have the ITD
subtracted froin the timing of the sound in the input signal. Similarly, the left signal can have the ITD
added to the tiining of the sound in the input signal. Such timing adjustments to yield left and right signals can be achieved in a known maiuler, and are depicted as left and right delay lines 246a and 246b.
[0109] If a sound source is substantially stationary relative to the listener, the same ITD can provide the arrival-time-based three-dimensional sound effect. If a sound source moves, however, the ITD may also change. If a new value of ITD is incorporated into the delay lines, there may be a sudden change from the previous ITD-based delays, possibly resulting in a detectable shift in the perception of ITDs.
[0110] In one embodiment, as shown in Figure 13, the ITD component 240 can further include crossfade components 250a and 250b that provide smoother transitions to new delay times for the left and right delay lines 246a and 246b. An example of ITD crossfade operation is described below in greater detail.
[0111] As shown in Figure 13, left and right delay-adjusted signals 248 are shown to be output by the ITD component 240. As described above, the delay-adjusted signals 248 may or may not be crossfaded. For example, if the source is stationary, there may not be a need to crossfade, since the ITD remains substantially the same. If the source moves, crossfading may be desired to reduce or substantially eliminate sudden shifts in ITDs due to changes in source locations.
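One way the crossfade components 250a/250b might smooth a delay change - a sketch of the idea only, not the patent's crossfade algorithm, which is described later in the disclosure - is to read the delay line at both the old and the new delay and blend linearly between the two over a fixed number of samples.

```python
def crossfade_delay(samples, old_delay, new_delay, fade_len):
    """Delay a block of samples, ramping linearly from old_delay to
    new_delay over the first fade_len output samples (integer delays;
    reads before the start of the block return silence)."""
    def delayed(n, d):
        return samples[n - d] if 0 <= n - d < len(samples) else 0.0
    out = []
    for n in range(len(samples)):
        w = 1.0 if fade_len <= 0 else min(1.0, n / float(fade_len))
        out.append((1.0 - w) * delayed(n, old_delay) + w * delayed(n, new_delay))
    return out
```

After fade_len samples, only the new delay contributes, so the shift in arrival time is spread over the fade rather than occurring as an audible jump.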
[0112] Figure 14 shows a block diagram of one embodiment of a positional-filters component 260 that can be implemented as the component 226 of Figure 12. As shown, left and right signals 262 are shown to be input to the positional-filters component 260. In one embodiment, the input signals 262 can be provided by the ITD component 240 of Figure 13. However, it will be understood that various features and concepts related to filter preparation (e.g., filter coefficient determination based on location-critical response) and/or filter use do not necessarily depend on having input signals provided by the ITD component 240. For example, an input signal from a source data may already have left/right-differentiated information and/or ITD-differentiated information. In such a situation, the positional-filters component 260 can operate as a substantially stand-alone component to provide a functionality that includes providing frequency response of sound based on selected location-critical information.
[0113] As shown in Figure 14, the left and right input signals 262 can be provided to a filter selection component 264. In one embodiment, filter selection can be based on the values of θ and φ associated with the sound source. For the six-hemi-plane example described herein, θ and φ can uniquely associate the sound source location with one of the hemi-planes. As described above, if a sound source is not on one of the hemi-planes, that source can be associated with the "nearest" hemi-plane.
[0114] For example, suppose that a sound source is located at θ = 10° and φ = +10°. In such a situation, the front horizontal hemi-plane (362 in Figure 10) can be selected, since the location is in front and the horizontal orientation is the nearest to the 10-degree elevation. The front horizontal hemi-plane 362 can have a set of filter coefficients as determined in the example manner shown in Table 2. Thus, four example filters (2 left and 2 right) corresponding to the "Front, φ = +0°" hemi-plane can be selected for this example source location.
[0115] As shown in Figure 14, left filters 266a and 268a (identified by the selection component 264) can be applied to the left signal, and right filters 266b and 268b (also identified by the selection component 264) can be applied to the right signal. In one embodiment, each of the filters 266a, 268a, 266b, and 268b operates on digital signals in a known manner based on its respective filter coefficients.
[0116] As described herein, the two left filters and two right filters are in the context of the two example location-critical peaks. It will be understood that other numbers of filters are possible. For example, if there are three location-critical features and/or ranges in the frequency responses, there may be three filters for each of the left and right sides.
[0117] As shown in Figure 14, a left gain component 270a can adjust the gain of the left signal, and a right gain component 270b can adjust the gain of the right signal. In one embodiment, the following gains, corresponding to the parameters of Table 2, can be applied to the left and right signals:

Table 3:
            0° Elevation   +45° Elevation   -45° Elevation
Left Gain     -4 dB           -4 dB           -20 dB
Right Gain     2 dB           -1 dB            -5 dB

In one embodiment, the example gain values listed in Table 3 can be assigned to substantially maintain a correct level difference between left and right signals at the three example elevations. Thus, these example gains can be used to provide correct levels in left and right processes, each of which, in this example, includes a 3-way summation of filter outputs (from first and second filters 266 and 268) and a scaled input (from gain component 270).
[0118] In one embodiment, as shown in Figure 14, the filtered and gain adjusted left and right signals can be summed by respective summers 272a and 272b so as to yield left and right output signals 274.
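As a non-authoritative sketch, the per-channel 3-way summation described above (two filters applied in parallel plus a scaled copy of the input, summed into one output) can be expressed as follows. The one-pole filters are toy stand-ins; the actual positional filter coefficients come from tables such as Table 2 and are outside this example.

```python
def one_pole_lowpass(samples, a):
    """Toy stand-in for one of the location-critical band filters."""
    out, y = [], 0.0
    for x in samples:
        y = a * y + (1.0 - a) * x
        out.append(y)
    return out

def position_filter_channel(samples, gain_db):
    """Sum two parallel filter outputs with a gain-scaled copy of the input."""
    gain = 10.0 ** (gain_db / 20.0)           # dB value -> linear factor
    band1 = one_pole_lowpass(samples, a=0.3)  # first filter (e.g. 266)
    band2 = one_pole_lowpass(samples, a=0.7)  # second filter (e.g. 268)
    scaled = [gain * x for x in samples]      # scaled input (e.g. 270)
    # 3-way summation into the channel output (e.g. summer 272)
    return [b1 + b2 + s for b1, b2, s in zip(band1, band2, scaled)]
```

The same routine would run once for the left channel and once for the right channel, each with its own filters and gain.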
[0119] Figure 15 shows a block diagram of one embodiment of an IID (interaural intensity difference) adjustment component 280 that can be implemented as the component 228 of Figure 12. As shown, left and right signals 282 are shown to be input to the IID component 280. In one embodiment, the input signals 282 can be provided by the positional filters component 260 of Figure 14.
[0120] In one embodiment, the IID component 280 can adjust the intensity of the weaker channel signal in a first compensation component 284, and also adjust the intensity of the stronger channel signal in a second compensation component 286. For example, suppose that a sound source is located at θ = 10° (that is, to the right side by 10 degrees). In such a situation, the right channel can be considered to be the stronger channel, and the left channel the weaker channel. Thus, the first compensation 284 can be applied to the left signal, and the second compensation 286 to the right signal.
[0121] In one embodiment, the level of the weaker channel signal can be adjusted by an amount given as Gain = |cos θ| (Fixed Filter Level Difference per Elevation - 1.0) + 1.0. (2) Thus, if θ = 0 degrees (directly in front), the gain of the weaker channel is adjusted by the original filter level difference. If θ = 90 degrees (directly to the right), Gain = 1, and no gain adjustment is made to the weaker channel.
[0122] In one embodiment, the level of the stronger channel signal can be adjusted by an amount given as Gain = sin θ + 1.0. (3) Thus, if θ = 0 degrees (directly in front), Gain = 1, and no gain adjustment is made to the stronger channel. If θ = 90 degrees (directly to the right), Gain = 2, thereby providing a 6 dB gain compensation to roughly match the overall loudness at different values of θ.
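Equations 2 and 3 can be sketched directly. The fixed filter level difference passed in below is a placeholder assumption; in the text it comes from the selected filter set at each elevation.

```python
import math

def weaker_channel_gain(theta_deg, fixed_level_diff):
    """Equation 2: full filter level difference at theta = 0, unity at 90."""
    return abs(math.cos(math.radians(theta_deg))) * (fixed_level_diff - 1.0) + 1.0

def stronger_channel_gain(theta_deg):
    """Equation 3: unity at theta = 0, a factor of 2 (about 6 dB) at 90."""
    return math.sin(math.radians(theta_deg)) + 1.0

print(weaker_channel_gain(0.0, 1.5))  # -> 1.5 (the original level difference)
print(stronger_channel_gain(90.0))    # -> 2.0 (about 6 dB of compensation)
```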
[0123] If a sound source is substantially stationary or moves substantially within a given hemi-plane, the same filters can be used to generate filter responses. Intensity compensations for weaker and stronger hearing sides can be provided by the IID compensations as described above. If a sound source moves from one hemi-plane to another hemi-plane, however, the filters can also change. Thus, IIDs that are based on the filter levels may not provide compensations in such a way as to make a smooth hemi-plane transition. Such a transition can result in a detectable sudden shift in intensity as the sound source moves between hemi-planes.
[0124] Thus, in one embodiment as shown in Figure 15, the IID component 280 can further include a crossfade component 290 that provides smoother transitions to a new hemi-plane as the source moves from an old hemi-plane to the new one. An example of IID crossfade operation is described below in greater detail.
[0125] As shown in Figure 15, left and right intensity adjusted signals 288 are shown to be output by the IID component 280. As described above, the intensity adjusted signals 288 may or may not be crossfaded. For example, if the source is stationary or moves within a given hemi-plane, there may not be a need to crossfade, since the filters remain substantially the same. If the source moves between hemi-planes, crossfading may be desired to reduce or substantially eliminate sudden shifts in IIDs.
[0126] Figure 16 shows one embodiment of a process 300 that can be performed by the ITD component described above in reference to Figures 12 and 13. In a process block 302, sound source position angles θ and φ are determined from input data. In a process block 304, maximized ITD samples are determined for each sampling rate. In a process block 306, ITD offset values for left and right data are determined. In a process block 308, delays corresponding to the ITD offset values are introduced to the left and right data.
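A minimal sketch of blocks 302-308, assuming the |sin θ cos φ| dependence recited in Claim 5 and an assumed maximum interaural delay of roughly 0.7 ms (ear spacing divided by the speed of sound; not a value given in the text):

```python
import math

MAX_ITD_SECONDS = 0.0007  # assumed maximum interaural time difference

def itd_delay_samples(theta_deg, phi_deg, sample_rate):
    """Blocks 302-304: maximized ITD samples for the sampling rate,
    scaled by the source direction."""
    max_samples = MAX_ITD_SECONDS * sample_rate
    scale = abs(math.sin(math.radians(theta_deg)) *
                math.cos(math.radians(phi_deg)))
    return int(round(max_samples * scale))

def itd_offsets(theta_deg, delay_samples):
    """Blocks 306-308: delay the far ear. With positive theta taken as the
    listener's right (an assumed convention), the left channel lags for a
    source on the right, and vice versa."""
    if theta_deg >= 0.0:
        return delay_samples, 0   # (left offset, right offset)
    return 0, delay_samples
```

For example, at a 44.1 kHz sampling rate and a source directly to the right (θ = 90°, φ = 0°), this yields a delay of 31 samples applied to the left channel.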
[0127] In one embodiment, the process 300 can further include a process block where crossfading is performed on the left and right ITD adjusted signals to account for motion of the sound source.

[0128] Figure 17 shows one embodiment of a process 310 that can be performed by the positional filters component and/or the IID component described above in reference to Figures 12, 14, and 15. In a process block 312, IID compensation gains can be determined. Equations 2 and 3 are examples of such compensation gain calculations.
[0129] In a decision block 314, the process 310 determines whether the sound source is at the front and to the right ("F.R."). If the answer is "Yes," front filters (at appropriate elevation) are applied to the left and right data in a process block 316. The filter-applied data and the gain adjusted data are summed to generate position-filters output signals. Because the source is at the right side, the right data is the stronger channel, and the left data is the weaker channel. Thus, in a process block 318, first compensation gain (Equation 2) is applied to the left data. In a process block 320, second compensation gain (Equation 3) is applied to the right data. The position filtered and gain adjusted left and right signals are output in a process block 322.
[0130] If the answer to the decision block 314 is "No," the sound source is not at the front and to the right. Thus, the process 310 proceeds to other remaining quadrants.
[0131] In a decision block 324, the process 310 determines whether the sound source is at the rear and to the right ("R.R."). If the answer is "Yes," rear filters (at appropriate elevation) are applied to the left and right data in a process block 326. The filter-applied data and the gain adjusted data are summed to generate position-filters output signals. Because the source is at the right side, the right data is the stronger channel, and the left data is the weaker channel. Thus, in a process block 328, first compensation gain (Equation 2) is applied to the left data. In a process block 330, second compensation gain (Equation 3) is applied to the right data. The position filtered and gain adjusted left and right signals are output in a process block 332.
[0132] If the answer to the decision block 324 is "No," the sound source is not at F.R. or R.R. Thus, the process 310 proceeds to other remaining quadrants.
[0133] In a decision block 334, the process 310 determines whether the sound source is at the rear and to the left ("R.L."). If the answer is "Yes," rear filters (at appropriate elevation) are applied to the left and right data in a process block 336. The filter-applied data and the gain adjusted data are summed to generate position-filters output signals. Because the source is at the left side, the left data is the stronger channel, and the right data is the weaker channel. Thus, in a process block 338, second compensation gain (Equation 3) is applied to the left data. In a process block 340, first compensation gain (Equation 2) is applied to the right data. The position filtered and gain adjusted left and right signals are output in a process block 342.

[0134] If the answer to the decision block 334 is "No," the sound source is not at F.R., R.R., or R.L. Thus, the process 310 proceeds with the sound source considered as being at the front and to the left ("F.L.").

[0135] In a process block 346, front filters (at appropriate elevation) are applied to the left and right data. The filter-applied data and the gain adjusted data are summed to generate position-filters output signals. Because the source is at the left side, the left data is the stronger channel, and the right data is the weaker channel. Thus, in a process block 348, second compensation gain (Equation 3) is applied to the left data. In a process block 350, first compensation gain (Equation 2) is applied to the right data. The position filtered and gain adjusted left and right signals are output in a process block 352.
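The four branches of process 310 share one pattern: front/rear picks the filter set, and left/right picks which channel receives the weaker-side (Equation 2) versus stronger-side (Equation 3) compensation. A hypothetical condensation, with the sign convention for θ and the front/rear test assumed for illustration:

```python
def route_compensation(theta_deg):
    """Pick the filter set and the weaker/stronger channel assignment.
    Assumed convention: positive theta is to the listener's right, and
    azimuths within 90 degrees of straight ahead are 'front'."""
    side = "front" if abs(theta_deg) <= 90.0 else "rear"
    if theta_deg >= 0.0:
        # F.R. or R.R.: right channel stronger, left channel weaker
        weaker, stronger = "left", "right"
    else:
        # F.L. or R.L.: left channel stronger, right channel weaker
        weaker, stronger = "right", "left"
    return side, weaker, stronger

print(route_compensation(10.0))    # -> ('front', 'left', 'right')
print(route_compensation(-120.0))  # -> ('rear', 'right', 'left')
```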
[0136] Figure 18 shows one embodiment of a process 390 that can be performed by the audio signal processing configuration 220 described above in reference to Figures 12-15. In particular, the process 390 can accommodate motion of a sound source, either within a hemi-plane, or between hemi-planes.

[0137] In a process block 392, a mono input signal is obtained. In a process block 394, position-based ITD is determined and applied to the input signal. In a decision block 396, the process 390 determines whether the sound source has changed position. If the answer is "No," data can be read from the left and right delay lines, have ITD delay applied, and written back to the delay lines. If the answer is "Yes," the process 390 in a process block 400 determines a new ITD delay based on the new position. In a process block 402, crossfade can be performed to provide smooth transition between the previous and new ITD delays.

[0138] In one embodiment, crossfading can be performed by reading data from previous and current delay lines. Thus, for example, each time the process 390 is called, θ and φ values are compared with those in the history to determine whether the source location has changed. If there is no change, a new ITD delay is not calculated, and the existing ITD delay is used (process block 398). If there is a change, a new ITD delay is calculated (process block 400), and crossfading is performed (process block 402). In one embodiment, ITD crossfading can be achieved by gradually increasing or decreasing the ITD delay value from the previous value to the new value.

[0139] In one embodiment, the crossfading of the ITD delay values can be triggered when the source's position change is detected, and the gradual change can occur during a plurality of processing cycles. For example, if the ITD delay has an old value ITD_old and a new value ITD_new, the crossfading transition can occur during N processing cycles: ITD(1) = ITD_old, ITD(2) = ITD_old + ΔITD/N, ..., ITD(N-1) = ITD_old + ΔITD(N-2)/N, ITD(N) = ITD_new; where ΔITD = ITD_new - ITD_old (assuming that ITD_new > ITD_old).
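The gradual ITD transition described above, stepping from the old value to the new value over N processing cycles, can be sketched as follows (the final cycle lands exactly on the new value):

```python
def itd_crossfade(itd_old, itd_new, n_cycles):
    """Per-cycle ITD values stepping from itd_old to itd_new over N cycles."""
    delta = itd_new - itd_old
    steps = []
    for k in range(1, n_cycles + 1):
        if k == n_cycles:
            steps.append(itd_new)  # last cycle lands exactly on the new value
        else:
            steps.append(itd_old + delta * (k - 1) / n_cycles)
    return steps

print(itd_crossfade(0.0, 10.0, 5))  # -> [0.0, 2.0, 4.0, 6.0, 10.0]
```

The same stepping scheme applies to the IID gains and filter coefficients described in paragraph [0142].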

[0140] As shown in Figure 18, the ITD adjusted data can be further processed with or without ITD crossfading, so that in a process block 404, positional filtering can be performed based on the current values of θ and φ. For the purpose of description of Figure 18, it will be assumed that the process block 404 also includes IID compensations.
[0141] In a decision block 406, the process 390 determines whether there has been a change in the hemi-plane. If the answer is "No," no crossfading of IID compensations is performed. If the answer is "Yes," the process 390 in a process block 408 performs another positional filtering based on the previous values of θ and φ. For the purpose of description of Figure 18, it will be assumed that the process block 408 also includes IID compensations. In a process block 410, crossfading can be performed between the IID compensation values and/or when filters are changed (for example, when switching filters corresponding to previous and current hemi-planes). Such crossfading can be configured to smooth out glitches or sudden shifts when applying different IID gains, switching of positional filters, or both.

[0142] In one embodiment, IID crossfading can be achieved by gradually increasing or decreasing the IID compensation gain value from the previous values to the new values, and/or the filter coefficients from the previous set to the new set. In one embodiment, the crossfading of the IID gain values can be triggered when a change in hemi-plane is detected, and the gradual changes of the IID gain values can occur during a plurality of processing cycles. For example, if a given IID gain has an old value IID_old and a new value IID_new, the crossfading transition can occur during N processing cycles: IID(1) = IID_old, IID(2) = IID_old + ΔIID/N, ..., IID(N-1) = IID_old + ΔIID(N-2)/N, IID(N) = IID_new; where ΔIID = IID_new - IID_old (assuming that IID_new > IID_old). Similar gradual changes can be introduced for the positional filter coefficients for crossfading positional filters.

[0143] As further shown in Figure 18, the positional filtered and IID compensated signals, whether or not IID crossfaded, yield output signals that can be amplified in a process block 412 so as to yield a processed stereo output 414.
[0144] In some embodiments, various features of the ITD, ITD crossfading, positional filtering, IID, IID crossfading, or combinations thereof, can be combined with other sound effect enhancing features. Figure 19 shows a block diagram of one embodiment of a signal processing configuration 420 where the sound signal can be processed before and/or after the ITD/positional filtering/IID processing. As shown, sound signal from a source 422 can be processed for sample rate conversion (SRC) 424 and adjusted for Doppler effect 426 to simulate a moving sound source. Effects accounting for distance 428 and the listener-source orientation 430 can also be implemented. In one embodiment, sound signal processed in the foregoing manner can be provided to the ITD component 434 as an input signal 432. ITD processing, as well as processing by the positional-filters 436 and IID 438, can be performed in a manner as described herein.

[0145] As further shown in Figure 19, the output from the IID component 438 can be processed further by a reverberation component 440 to provide reverberation effect in the output signal 442.

[0146] In one embodiment, functionalities of the SRC 424, Doppler 426, Distance 428, Orientation 430, and Reverberation 440 components can be based on known techniques, and thus need not be described further.

[0147] Figure 20 shows that in one embodiment, a plurality of audio signal processing chains (depicted as 1 to N, with N > 1) can process signals from a plurality of sources 452. In one embodiment, each chain of SRC 454, Doppler 456, Distance 458, Orientation 460, ITD 462, Positional filters 464, and IID 466 can be configured similar to the single-chain example 420 of Figure 19. The left and right outputs from the plurality of IIDs 466 can be combined in respective downmix components 470 and 474, and the two downmixed signals can be reverberation processed (472 and 476) so as to produce output signals 478.

[0148] In one embodiment, functionalities of the SRC 454, Doppler 456, Distance 458, Orientation 460, Downmix (470 and 474), and Reverberation (472 and 476) components can be based on known techniques, and thus need not be described further.

[0149] Figure 21 shows that in one embodiment, other configurations are possible. For example, each of a plurality of sound data streams (depicted as example streams 1 to 8) 482 can be processed via reverberation 484, Doppler 486, distance 488, and orientation 490 components. The output from the orientation component 490 can be input to an ITD component 492 that outputs left and right signals.
[0150] As shown in Figure 21, the outputs of the eight ITDs 492 can be directed to corresponding position filters via a downmix component 494. Six such sets of position filters 496 are depicted to correspond to the six example hemi-planes. The position filters 496 apply their respective filters to the inputs provided thereto, and provide corresponding left and right output signals. For the purpose of description of Figure 21, it will be assumed that the position filters can also provide the IID compensation functionality.
[0151] As shown in Figure 21, the outputs of the position filters 496 can be further downmixed by a downmix component 498 that mixes 2D streams (such as normal stereo contents) with 3D streams that are processed as described herein. In one embodiment, such downmixing can avoid clipping in audio signals. The downmixed output signals can be further processed by a sound enhancing component 500 such as an SRS "WOW XT" application to generate the output signals 502.

[0152] As seen by way of examples, various configurations are possible for incorporating the features of the ITD, positional filters, and/or IID with various other sound effect enhancing techniques. Thus, it will be understood that configurations other than those shown are possible.

[0153] Figures 22A and 22B show non-limiting example configurations of how various functionalities of positional filtering can be implemented. In one example system 510 shown in Figure 22A, positional filtering can be performed by a component indicated as the 3D sound application programming interface (API) 520. Such an API can provide the positional filtering functionality while providing an interface between the operating system 518 and a multimedia application 522. An audio output component 524 can then provide an output signal 526 to an output device such as speakers or a headphone.

[0154] In one embodiment, at least some portion of the 3D sound API 520 can reside in the program memory 516 of the system 510, and be under the control of a processor 514. In one embodiment, the system 510 can also include a display component that can provide visual input to the listener. Visual cues provided by the display 512 and the sound processing provided by the API 520 can enhance the audio-visual effect to the listener/viewer.

[0155] Figure 22B shows another example system 530 that can also include a display component 532 and an audio output component 538 that outputs position filtered signal 540 to devices such as speakers or a headphone. In one embodiment, the system 530 can include internal data 534, or access to such data, having at least some information needed for position filtering. For example, various filter coefficients and other information may be provided from the data 534 to some application (not shown) being executed under the control of a processor 536. Other configurations are possible.

[0156] As described herein, various features of positional filtering and associated processing techniques allow generation of realistic three-dimensional sound effect without heavy computation requirements. As such, various features of the present disclosure can be particularly useful for implementations in portable devices where computation power and resources may be limited.

[0157] Figures 23A and 23B show non-limiting examples of portable devices where various functionalities of positional-filtering can be implemented. Figure 23A shows that in one embodiment, the 3D audio functionality 556 can be implemented in a portable device such as a cell phone 550. Many cell phones provide multimedia functionalities that can include a video display 552 and an audio output 554. Yet, such devices typically have limited computing power and resources. Thus, the 3D audio functionality 556 can provide an enhanced listening experience for the user of the cell phone 550.

[0158] Figure 23B shows that in another example implementation 560, surround sound effect can be simulated (depicted by simulated sound sources 126) by positional-filtering. Output signals 564 provided to a headphone 124 can result in the listener 102 experiencing surround-sound effect while listening to only the left and right speakers of the headphone 124.

[0159] For the example surround-sound configuration 560, positional-filtering can be configured to process five sound sources (for example, five processing chains in Figures 20 or 21). In one embodiment, information about the location of the sound sources (for example, which of the five simulated speakers) can be encoded in the input data. Since the five speakers 126 do not move relative to the listener 102, positions of the five sound sources can be fixed in the processing. Thus, ITD determination can be simplified; ITD crossfading can be eliminated; filter selection(s) can be fixed (for example, if the sources are placed on the horizontal plane, only the front and rear horizontal hemi-planes need to be used); IID compensation can be simplified; and IID crossfading can be eliminated.
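As an illustration of the fixed-source simplification, the per-speaker ITDs can be computed once at setup. The azimuths below follow a common 5.1 loudspeaker placement (center, +/-30 degrees front, +/-110 degrees rear), and the 0.7 ms maximum delay is an assumption; neither value comes from the text.

```python
import math

# Assumed 5.1-style azimuths (degrees); positive is the listener's right.
SPEAKER_AZIMUTHS = {"center": 0.0, "front_left": -30.0, "front_right": 30.0,
                    "rear_left": -110.0, "rear_right": 110.0}

def precompute_itds(sample_rate, max_itd_s=0.0007):
    """One-time ITD (in samples) per fixed speaker; since the simulated
    sources never move, no ITD crossfading is needed afterwards."""
    return {name: int(round(max_itd_s * sample_rate *
                            abs(math.sin(math.radians(az)))))
            for name, az in SPEAKER_AZIMUTHS.items()}

print(precompute_itds(44100)["center"])  # -> 0
```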

[0160] Other implementations on portable as well as non-portable devices are possible.

[0161] In the description herein, various functionalities are described and depicted in terms of components or modules. Such depictions are for the purpose of description, and do not necessarily mean physical boundaries or packaging configurations. For example, Figure 12 (and other Figures) depicts ITD, Positional Filters, and IID as components. It will be understood that the functionalities of these components can be implemented in a single device/software, separate devices/softwares, or any combination thereof. Moreover, for a given component such as the Positional Filters, its functionalities can be implemented in a single device/software, a plurality of devices/softwares, or any combination thereof.

[0162] In general, it will be appreciated that the processors can include, by way of example, computers, program logic, or other substrate configurations representing data and instructions, which operate as described herein. In other embodiments, the processors can include controller circuitry, processor circuitry, processors, general purpose single-chip or multi-chip microprocessors, digital signal processors, embedded microprocessors, microcontrollers and the like.

[0163] Furthermore, it will be appreciated that in one embodiment, the program logic may advantageously be implemented as one or more components. The components may advantageously be configured to execute on one or more processors. The components include, but are not limited to, software or hardware components, modules such as software modules, object-oriented software components, class components and task components, processes, methods, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

[0164] Although the above-disclosed embodiments have shown, described, and pointed out the fundamental novel features of the invention as applied to the above-disclosed embodiments, it should be understood that various omissions, substitutions, and changes in the form of the detail of the devices, systems, and/or methods shown may be made by those skilled in the art without departing from the scope of the invention. Consequently, the scope of the invention should not be limited to the foregoing description, but should be defined by the appended claims.

Claims (46)

1. A method for processing digital audio signals, comprising:
receiving one or more digital signals, each of said one or more digital signals having information about spatial position of a sound source relative to a listener;
selecting one or more digital filters, each of said one or more digital filters being formed from a particular range of a hearing response function; and applying said one or more filters to said one or more digital signals so as to yield corresponding one or more filtered signals, each of said one or more filtered signals having a simulated effect of said hearing response function applied to said sound source.
2. The method of Claim 1, wherein said one or more digital signals comprise left and right digital signals to be output to left and right speakers.
3. The method of Claim 2, wherein said left and right digital signals are adjusted for interaural time difference (ITD) based on said spatial position of said sound source relative to said listener.
4. The method of Claim 3, wherein said ITD adjustment comprises:
receiving a mono input signal having information about said spatial position of said sound source;

determining a time difference value based on said spatial information; and generating left and right signals by introducing said time difference value to said mono input signal.
5. The method of Claim 4, wherein said time difference value comprises a quantity that is proportional to the absolute value of sin θ cos φ, where θ represents an azimuthal angle of said sound source relative to the front of said listener, and φ represents an elevation angle of said sound source relative to a horizontal plane defined by said listener's ears and the front direction.
6. The method of Claim 4, wherein said determination of time difference value is performed when said spatial position of said sound source changes.
7. The method of Claim 6, further comprising performing a crossfade transition of said time difference value between the previous value and the current value.
8. The method of Claim 7, wherein said crossfade transition comprises changing the time difference value for use in said generation of left and right signals from the previous value to the current value during a plurality of processing cycles.
9. The method of Claim 1, wherein said one or more filtered signals comprise left and right filtered signals to be output to left and right speakers.
10. The method of Claim 9, further comprising adjusting each of said left and right filtered signals for interaural intensity difference (IID) to account for any intensity differences that may exist and are not accounted for by said application of one or more filters.
11. The method of Claim 10, wherein said adjustment of said left and right filtered signals for IID comprises:
determining whether said sound source is positioned at left or right relative to said listener;
assigning as a weaker signal the left or right filtered signal that is on the opposite side as the sound source;
assigning as a stronger signal the other of the left or right filtered signal;

adjusting said weaker signal by a first compensation; and adjusting said stronger signal by a second compensation.
12. The method of Claim 11, wherein said first compensation comprises a compensation value that is proportional to cos θ, where θ represents an azimuthal angle of said sound source relative to the front of said listener.
13. The method of Claim 11, wherein said second compensation comprises a compensation value that is proportional to sin θ, where θ represents an azimuthal angle of said sound source relative to the front of said listener.
14. The method of Claim 11, wherein said adjustment of said left and right filtered signals for IID is performed when new one or more digital filters are applied to said left and right filtered signals due to selected movements of said sound source.
15. The method of Claim 14, further comprising performing a crossfade transition of said first and second compensation values between the previous values and the current values.
16. The method of Claim 15, wherein said crossfade transition comprises changing the first and second compensation values during a plurality of processing cycles.
17. The method of Claim 1, further comprising performing at least one of the following processing steps either before said receiving of said one or more digital signals or after said applying of said one or more filters: sample rate conversion, Doppler adjustment for sound source velocity, distance adjustment to account for distance of said sound source to said listener, orientation adjustment to account for orientation of said listener's head relative to said sound source, or reverberation adjustment.
18. The method of Claim 1, wherein said application of said one or more digital filters to said one or more digital signals simulates an effect of motion of said sound source about said listener.
19. The method of Claim 1, wherein said application of said one or more digital filters to said one or more digital signals simulates an effect of placing said sound source at a selected location about said listener.
20. The method of Claim 19, further comprising simulating effects of one or more additional sound sources to simulate an effect of a plurality of sound sources at selected locations about said listener.
21. The method of Claim 19, wherein said one or more digital signals comprise left and right digital signals to be output to left and right speakers and said plurality of sound sources comprise more than two sound sources such that effects of more than two sound sources are simulated with said left and right speakers.
22. The method of Claim 21, wherein said plurality of sound sources comprise five sound sources arranged in a manner similar to one of surround sound arrangements, and wherein said left and right speakers are positioned in a headphone, such that surround sound effects are simulated by said left and right filtered signals provided to said headphone.
23. A positional audio engine for processing digital signal representative of a sound from a sound source, comprising:
a filter selection component configured to select one or more digital filters, each of said one or more digital filters being formed from a particular range of a hearing response function, said selection based on spatial position of said sound source relative to a listener;

a filter application component configured to apply said one or more digital filters to one or more digital signals so as to yield corresponding one or more filtered signals, each of said one or more filtered signals having a simulated effect of said hearing response function applied to said sound from said sound source.
24. The audio engine of Claim 23, wherein said hearing response function comprises a head-related transfer function (HRTF).
25. The audio engine of Claim 24, wherein said particular range comprises a particular range of frequency within said HRTF.
26. The audio engine of Claim 25, wherein said particular range of frequency is substantially within or overlaps with a range of frequency that provides a location-discriminating sensitivity to an average human's hearing that is greater than an average sensitivity among an audible frequency.
27. The audio engine of Claim 25, wherein said particular range of frequency includes or substantially overlaps with a peak structure in said HRTF.
28. The audio engine of Claim 27, wherein said peak structure is substantially within or overlaps with a range of frequency between about 2.5 KHz and about 7.5 KHz.
29. The audio engine of Claim 27, wherein said peak structure is substantially within or overlaps with a range of frequency between about 8.5 KHz and about 18 KHz.
30. The audio engine of Claim 23, wherein said one or more digital filters comprise a plurality of digital filters.
31. The audio engine of Claim 30, wherein each of said one or more digital signals is split into the same number of signals as the number of said plurality of digital filters such that said plurality of digital filters are applied in parallel to said plurality of split signals.
32. The audio engine of Claim 31, wherein said each of one or more filtered signals is obtained by combining said plurality of split signals filtered by said plurality of digital filters.
33. The audio engine of Claim 32, wherein said combining comprises summing of said plurality of split signals.
34. The audio engine of Claim 30, wherein said plurality of digital filters comprise first and second digital filters.
35. The audio engine of Claim 34, wherein each of said first and second digital filters comprises a filter that yields a response that is substantially maximally flat in a passband portion and rolls off towards substantially zero in a stopband portion of said hearing response function.
36. The audio engine of Claim 35, wherein each of said first and second digital filters comprises a Butterworth filter.
37. The audio engine of Claim 35, wherein said passband portion for one of said first and second digital filters is defined by a frequency range between about 2.5 kHz and about 7.5 kHz.
38. The audio engine of Claim 35, wherein said passband portion for one of said first and second digital filters is defined by a frequency range between about 8.5 kHz and about 18 kHz.
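Claims 35–38 call for maximally flat (e.g. Butterworth) band filters with passbands of roughly 2.5–7.5 kHz and 8.5–18 kHz. The sketch below is not a Butterworth design; it cascades one-pole low-pass and high-pass sections merely to illustrate a passband bounded by those two corner frequencies:

```python
import math

def one_pole_coeff(fc, fs):
    # Feedback coefficient for a one-pole filter with cutoff fc at sample rate fs.
    return math.exp(-2.0 * math.pi * fc / fs)

def crude_bandpass(signal, f_lo, f_hi, fs):
    """Pass roughly f_lo..f_hi Hz: a one-pole low-pass at f_hi followed by a
    one-pole high-pass at f_lo. A real implementation would use a proper
    Butterworth design as the claims require."""
    a_lp, a_hp = one_pole_coeff(f_hi, fs), one_pole_coeff(f_lo, fs)
    lp = hp_track = 0.0
    out = []
    for s in signal:
        lp = (1.0 - a_lp) * s + a_lp * lp          # low-pass at f_hi
        hp_track = (1.0 - a_hp) * lp + a_hp * hp_track
        out.append(lp - hp_track)                  # high-pass at f_lo
    return out

# DC (0 Hz) lies outside the 2.5-7.5 kHz band, so a constant input decays toward zero.
dc_response = crude_bandpass([1.0] * 2000, 2500.0, 7500.0, 44100.0)
```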
39. The audio engine of Claim 23, wherein said selection of said one or more digital filters is based on a finite number of geometric positions about said listener.
40. The audio engine of Claim 39, wherein said geometric positions comprise a plurality of hemi-planes, each hemi-plane defined by an edge along a direction between the ears of said listener and by an elevation angle φ relative to a horizontal plane defined by said ears and the front direction for said listener.
41. The audio engine of Claim 40, wherein said plurality of hemi-planes are grouped into one or more front hemi-planes and one or more rear hemi-planes.
42. The audio engine of Claim 41, wherein said front hemi-planes comprise hemi-planes in front of said listener at elevation angles of approximately 0 and +/- 45 degrees, and said rear hemi-planes comprise hemi-planes to the rear of said listener at elevation angles of approximately 0 and +/- 45 degrees.
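Claims 39–42 base filter selection on a finite set of hemi-planes: front and rear, at elevations of about 0 and ±45 degrees. A toy quantizer in Python; the angle conventions (azimuth in degrees, "front" meaning |azimuth| ≤ 90°) are assumptions for illustration, not taken from the patent:

```python
def nearest_hemi_plane(azimuth_deg, elevation_deg):
    """Snap a source direction to one of six hemi-planes (claims 39-42):
    front/rear crossed with elevations of roughly 0 and +/-45 degrees."""
    half = "front" if -90.0 <= azimuth_deg <= 90.0 else "rear"
    elevation = min((0.0, 45.0, -45.0), key=lambda e: abs(e - elevation_deg))
    return half, elevation

# A source slightly above and ahead of the listener snaps to the front 0-degree plane:
nearest_hemi_plane(30.0, 20.0)  # -> ('front', 0.0)
```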
43. A system for processing digital audio signals, comprising:
an interaural time difference (ITD) component configured to receive a mono input signal and generate left and right ITD-adjusted signals to simulate an arrival time difference of sound arriving at left and right ears of a listener from a sound source, said mono input signal having information about spatial position of said sound source relative to said listener;
a positional filter component configured to receive said left and right ITD-adjusted signals, apply one or more digital filters to each of said left and right ITD-adjusted signals to generate left and right filtered digital signals, each of said one or more digital filters based on a particular range of a hearing response function, such that said left and right filtered digital signals simulate said hearing response function; and
an interaural intensity difference (IID) component configured to receive said left and right filtered digital signals and generate left and right IID-adjusted signals to simulate an intensity difference of said sound arriving at said left and right ears.
44. The system of Claim 43, further comprising at least one of the following:
a sample rate conversion component, a Doppler adjustment component configured to simulate sound source velocity, a distance adjustment component configured to account for distance of said sound source to said listener, an orientation adjustment component configured to account for orientation of said listener's head relative to said sound source, or a reverberation adjustment component configured to simulate a reverberation effect.
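Claim 43's chain applies an interaural time difference, positional filtering, and an interaural intensity difference in sequence. A minimal sketch of the ITD and IID stages in Python; the delay and gain formulas are illustrative placeholders, not the patent's, and the positional filtering step is left as an identity for brevity:

```python
import math

def binaural_chain(mono, azimuth_deg, fs=44100):
    """ITD -> positional filter -> IID, in the spirit of claim 43.
    azimuth_deg > 0 places the source to the listener's right."""
    # ITD: delay the far ear by up to ~0.7 ms depending on azimuth.
    itd = int(round(abs(math.sin(math.radians(azimuth_deg))) * 0.0007 * fs))
    near = list(mono) + [0.0] * itd
    far = [0.0] * itd + list(mono)
    # Positional filtering (claims 30-38) would be applied here; identity for brevity.
    # IID: attenuate the far ear.
    far = [(1.0 - 0.5 * abs(math.sin(math.radians(azimuth_deg)))) * s for s in far]
    return (far, near) if azimuth_deg >= 0 else (near, far)

left, right = binaural_chain([1.0, 0.0, 0.0], 90.0)
# The left (far) ear receives the impulse later and quieter than the right ear.
```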
45. A system for processing digital audio signals, comprising:
a plurality of signal processing chains, each chain comprising:
an interaural time difference (ITD) component configured to receive a mono input signal and generate left and right ITD-adjusted signals to simulate an arrival time difference of sound arriving at left and right ears of a listener from a sound source, said mono input signal having information about spatial position of said sound source relative to said listener;
a positional filter component configured to receive said left and right ITD-adjusted signals, apply one or more digital filters to each of said left and right ITD-adjusted signals to generate left and right filtered digital signals, each of said one or more digital filters based on a particular range of a hearing response function, such that said left and right filtered digital signals simulate said hearing response function; and
an interaural intensity difference (IID) component configured to receive said left and right filtered digital signals and generate left and right IID-adjusted signals to simulate an intensity difference of said sound arriving at said left and right ears.
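Claim 45 runs one such chain per sound source. The claim text does not recite a downstream combining stage, but a natural (assumed) completion mixes the per-chain left/right outputs into one stereo bus by summation:

```python
def mix_chains(chain_outputs):
    """Sum per-source (left, right) chain outputs into one stereo bus.
    Assumes every chain produced buffers of equal length; this mixing
    stage is an assumption, not recited in claim 45."""
    n = len(chain_outputs[0][0])
    left = [sum(pair[0][i] for pair in chain_outputs) for i in range(n)]
    right = [sum(pair[1][i] for pair in chain_outputs) for i in range(n)]
    return left, right

# Two processed sources mixed into one stereo pair:
bus_l, bus_r = mix_chains([([1.0, 2.0], [0.5, 0.5]),
                           ([0.0, 1.0], [1.0, 0.0])])
# bus_l == [1.0, 3.0], bus_r == [1.5, 0.5]
```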
46. An apparatus, comprising:
a means for receiving one or more digital signals;
a means for selecting one or more digital filters based on information about spatial position of a sound source; and
a means for applying said one or more filters to said one or more digital signals so as to yield corresponding one or more filtered signals that simulate an effect of a hearing response function.
CA2621175A 2005-09-13 2006-09-13 Systems and methods for audio processing Active CA2621175C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US71658805P 2005-09-13 2005-09-13
US60/716,588 2005-09-13
PCT/US2006/035446 WO2007033150A1 (en) 2005-09-13 2006-09-13 Systems and methods for audio processing

Publications (2)

Publication Number Publication Date
CA2621175A1 true CA2621175A1 (en) 2007-03-22
CA2621175C CA2621175C (en) 2015-12-22

Family

ID=37496972

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2621175A Active CA2621175C (en) 2005-09-13 2006-09-13 Systems and methods for audio processing

Country Status (8)

Country Link
US (2) US8027477B2 (en)
EP (1) EP1938661B1 (en)
JP (1) JP4927848B2 (en)
KR (1) KR101304797B1 (en)
CN (1) CN101263739B (en)
CA (1) CA2621175C (en)
PL (1) PL1938661T3 (en)
WO (1) WO2007033150A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084050B2 (en) * 2013-07-12 2015-07-14 Elwha Llc Systems and methods for remapping an audio range to a human perceivable range

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007033150A1 (en) 2005-09-13 2007-03-22 Srs Labs, Inc. Systems and methods for audio processing
WO2007123788A2 (en) * 2006-04-03 2007-11-01 Srs Labs, Inc. Audio signal processing
WO2007119058A1 (en) * 2006-04-19 2007-10-25 Big Bean Audio Limited Processing audio input signals
RU2454825C2 (en) * 2006-09-14 2012-06-27 Конинклейке Филипс Электроникс Н.В. Manipulation of sweet spot for multi-channel signal
US8050434B1 (en) 2006-12-21 2011-11-01 Srs Labs, Inc. Multi-channel audio enhancement system
ATE484761T1 (en) * 2007-01-16 2010-10-15 Harman Becker Automotive Sys APPARATUS AND METHOD FOR TRACKING SURROUND HEADPHONES USING AUDIO SIGNALS BELOW THE MASKED HEARING THRESHOLD
KR20080079502A (en) * 2007-02-27 2008-09-01 삼성전자주식회사 Stereophony outputting apparatus and early reflection generating method thereof
PL2198632T3 (en) * 2007-10-09 2014-08-29 Koninklijke Philips Nv Method and apparatus for generating a binaural audio signal
TWI475896B (en) * 2008-09-25 2015-03-01 Dolby Lab Licensing Corp Binaural filters for monophonic compatibility and loudspeaker compatibility
JP5694174B2 (en) * 2008-10-20 2015-04-01 ジェノーディオ,インコーポレーテッド Audio spatialization and environmental simulation
JP5499513B2 (en) * 2009-04-21 2014-05-21 ソニー株式会社 Sound processing apparatus, sound image localization processing method, and sound image localization processing program
KR101040086B1 (en) * 2009-05-20 2011-06-09 전자부품연구원 Method and apparatus for generating audio and method and apparatus for reproducing audio
EP2262285B1 (en) 2009-06-02 2016-11-30 Oticon A/S A listening device providing enhanced localization cues, its use and a method
KR20120004909A (en) * 2010-07-07 2012-01-13 삼성전자주식회사 Method and apparatus for 3d sound reproducing
KR20120040290A (en) * 2010-10-19 2012-04-27 삼성전자주식회사 Image processing apparatus, sound processing method used for image processing apparatus, and sound processing apparatus
KR101827032B1 (en) 2010-10-20 2018-02-07 디티에스 엘엘씨 Stereo image widening system
US9164724B2 (en) * 2011-08-26 2015-10-20 Dts Llc Audio adjustment system
KR102160248B1 (en) 2012-01-05 2020-09-25 삼성전자주식회사 Apparatus and method for localizing multichannel sound signal
US20130202132A1 (en) * 2012-02-03 2013-08-08 Motorola Mobilitity, Inc. Motion Based Compensation of Downlinked Audio
US8704070B2 (en) 2012-03-04 2014-04-22 John Beaty System and method for mapping and displaying audio source locations
CN103796150B (en) * 2012-10-30 2017-02-15 华为技术有限公司 Processing method, device and system of audio signals
KR101815079B1 (en) 2013-09-17 2018-01-04 주식회사 윌러스표준기술연구소 Method and device for audio signal processing
US10204630B2 (en) 2013-10-22 2019-02-12 Electronics and Telecommunications Research Institute Method for generating filter for audio signal and parameterizing device therefor
EP3005362B1 (en) * 2013-11-15 2021-09-22 Huawei Technologies Co., Ltd. Apparatus and method for improving a perception of a sound signal
WO2015099429A1 (en) * 2013-12-23 2015-07-02 주식회사 윌러스표준기술연구소 Audio signal processing method, parameterization device for same, and audio signal processing device
EP4294055A1 (en) 2014-03-19 2023-12-20 Wilus Institute of Standards and Technology Inc. Audio signal processing method and apparatus
KR101856540B1 (en) 2014-04-02 2018-05-11 주식회사 윌러스표준기술연구소 Audio signal processing method and device
EP4329331A2 (en) * 2014-04-02 2024-02-28 Wilus Institute of Standards and Technology Inc. Audio signal processing method and device
US9042563B1 (en) 2014-04-11 2015-05-26 John Beaty System and method to localize sound and provide real-time world coordinates with communication
CN104125522A (en) * 2014-07-18 2014-10-29 北京智谷睿拓技术服务有限公司 Sound track configuration method and device and user device
CN107073277B (en) * 2014-10-08 2020-05-15 Med-El电气医疗器械有限公司 Neural coding with short inter-pulse intervals
CN114849250A (en) 2014-11-30 2022-08-05 杜比实验室特许公司 Large format theater design for social media linking
US9551161B2 (en) 2014-11-30 2017-01-24 Dolby Laboratories Licensing Corporation Theater entrance
CN104735588B (en) 2015-01-21 2018-10-30 华为技术有限公司 Handle the method and terminal device of voice signal
GB2535990A (en) * 2015-02-26 2016-09-07 Univ Antwerpen Computer program and method of determining a personalized head-related transfer function and interaural time difference function
KR20160122029A (en) * 2015-04-13 2016-10-21 삼성전자주식회사 Method and apparatus for processing audio signal based on speaker information
US20170325043A1 (en) 2016-05-06 2017-11-09 Jean-Marc Jot Immersive audio reproduction systems
CN106507266B (en) * 2016-10-31 2019-06-11 深圳市米尔声学科技发展有限公司 Audio processing equipment and method
CN108076415B (en) * 2016-11-16 2020-06-30 南京大学 Real-time realization method of Doppler sound effect
US10979844B2 (en) 2017-03-08 2021-04-13 Dts, Inc. Distributed audio virtualization systems
CN110111804B (en) * 2018-02-01 2021-03-19 南京大学 Self-adaptive dereverberation method based on RLS algorithm
US10856097B2 (en) 2018-09-27 2020-12-01 Sony Corporation Generating personalized end user head-related transfer function (HRTV) using panoramic images of ear
US11906642B2 (en) 2018-09-28 2024-02-20 Silicon Laboratories Inc. Systems and methods for modifying information of audio data based on one or more radio frequency (RF) signal reception and/or transmission characteristics
US11019450B2 (en) 2018-10-24 2021-05-25 Otto Engineering, Inc. Directional awareness audio communications system
CN109637550B (en) * 2018-12-27 2020-11-24 中国科学院声学研究所 Method and system for controlling elevation angle of sound source
US11113092B2 (en) * 2019-02-08 2021-09-07 Sony Corporation Global HRTF repository
US11451907B2 (en) 2019-05-29 2022-09-20 Sony Corporation Techniques combining plural head-related transfer function (HRTF) spheres to place audio objects
US11347832B2 (en) 2019-06-13 2022-05-31 Sony Corporation Head related transfer function (HRTF) as biometric authentication
US11146908B2 (en) 2019-10-24 2021-10-12 Sony Corporation Generating personalized end user head-related transfer function (HRTF) from generic HRTF
US11070930B2 (en) 2019-11-12 2021-07-20 Sony Corporation Generating personalized end user room-related transfer function (RRTF)

Family Cites Families (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412731A (en) * 1982-11-08 1995-05-02 Desper Products, Inc. Automatic stereophonic manipulation system and apparatus for image enhancement
US4817149A (en) * 1987-01-22 1989-03-28 American Natural Sound Company Three-dimensional auditory display apparatus and method utilizing enhanced bionic emulation of human binaural sound localization
US4836329A (en) * 1987-07-21 1989-06-06 Hughes Aircraft Company Loudspeaker system with wide dispersion baffle
US4819269A (en) * 1987-07-21 1989-04-04 Hughes Aircraft Company Extended imaging split mode loudspeaker system
US4841572A (en) * 1988-03-14 1989-06-20 Hughes Aircraft Company Stereo synthesizer
US4866774A (en) * 1988-11-02 1989-09-12 Hughes Aircraft Company Stero enhancement and directivity servo
DE3932858C2 (en) * 1988-12-07 1996-12-19 Onkyo Kk Stereophonic playback system
FR2650294B1 (en) 1989-07-28 1991-10-25 Rhone Poulenc Chimie PROCESS FOR TREATING SKINS, AND SKINS OBTAINED
US5173944A (en) * 1992-01-29 1992-12-22 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Head related transfer function pseudo-stereophony
DE69322805T2 (en) * 1992-04-03 1999-08-26 Yamaha Corp Method of controlling sound source position
US5333201A (en) * 1992-11-12 1994-07-26 Rocktron Corporation Multi dimensional sound circuit
US5319713A (en) * 1992-11-12 1994-06-07 Rocktron Corporation Multi dimensional sound circuit
US5438623A (en) * 1993-10-04 1995-08-01 The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration Multi-channel spatialization system for audio signals
DE69522971T2 (en) * 1994-02-25 2002-04-04 Henrik Moller Binaural synthesis, head-related transfer function, and their use
US5592588A (en) * 1994-05-10 1997-01-07 Apple Computer, Inc. Method and apparatus for object-oriented digital audio signal processing using a chain of sound objects
US5491685A (en) * 1994-05-19 1996-02-13 Digital Pictures, Inc. System and method of digital compression and decompression using scaled quantization of variable-sized packets
US6072877A (en) * 1994-09-09 2000-06-06 Aureal Semiconductor, Inc. Three-dimensional virtual audio display employing reduced complexity imaging filters
US5638452A (en) * 1995-04-21 1997-06-10 Rocktron Corporation Expandable multi-dimensional sound circuit
US5943427A (en) * 1995-04-21 1999-08-24 Creative Technology Ltd. Method and apparatus for three dimensional audio spatialization
US5661808A (en) * 1995-04-27 1997-08-26 Srs Labs, Inc. Stereo enhancement system
US5850453A (en) * 1995-07-28 1998-12-15 Srs Labs, Inc. Acoustic correction apparatus
DE69637736D1 (en) * 1995-09-08 2008-12-18 Fujitsu Ltd Three-dimensional acoustic processor with application of linear predictive coefficients
IT1281001B1 (en) * 1995-10-27 1998-02-11 Cselt Centro Studi Lab Telecom PROCEDURE AND EQUIPMENT FOR CODING, HANDLING AND DECODING AUDIO SIGNALS.
US5771295A (en) * 1995-12-26 1998-06-23 Rocktron Corporation 5-2-5 matrix system
US5742689A (en) * 1996-01-04 1998-04-21 Virtual Listening Systems, Inc. Method and device for processing a multichannel signal for use with a headphone
US5970152A (en) * 1996-04-30 1999-10-19 Srs Labs, Inc. Audio enhancement system for use in a surround sound environment
JPH09322299A (en) * 1996-05-24 1997-12-12 Victor Co Of Japan Ltd Sound image localization controller
US5995631A (en) * 1996-07-23 1999-11-30 Kabushiki Kaisha Kawai Gakki Seisakusho Sound image localization apparatus, stereophonic sound image enhancement apparatus, and sound image control system
JP3976360B2 (en) * 1996-08-29 2007-09-19 富士通株式会社 Stereo sound processor
US6421446B1 (en) * 1996-09-25 2002-07-16 Qsound Labs, Inc. Apparatus for creating 3D audio imaging over headphones using binaural synthesis including elevation
US5809149A (en) * 1996-09-25 1998-09-15 Qsound Labs, Inc. Apparatus for creating 3D audio imaging over headphones using binaural synthesis
US5784468A (en) * 1996-10-07 1998-07-21 Srs Labs, Inc. Spatial enhancement speaker systems and methods for spatially enhanced sound reproduction
JP3255348B2 (en) 1996-11-27 2002-02-12 株式会社河合楽器製作所 Delay amount control device and sound image control device
US6035045A (en) 1996-10-22 2000-03-07 Kabushiki Kaisha Kawai Gakki Seisakusho Sound image localization method and apparatus, delay amount control apparatus, and sound image control apparatus with using delay amount control apparatus
US5912976A (en) 1996-11-07 1999-06-15 Srs Labs, Inc. Multi-channel audio enhancement system for use in recording and playback and methods for providing same
JP3266020B2 (en) * 1996-12-12 2002-03-18 ヤマハ株式会社 Sound image localization method and apparatus
JP3208529B2 (en) 1997-02-10 2001-09-17 収一 佐藤 Back electromotive voltage detection method of speaker drive circuit in audio system and circuit thereof
US6281749B1 (en) * 1997-06-17 2001-08-28 Srs Labs, Inc. Sound enhancement system
US6078669A (en) * 1997-07-14 2000-06-20 Euphonics, Incorporated Audio spatial localization apparatus and methods
US6307941B1 (en) * 1997-07-15 2001-10-23 Desper Products, Inc. System and method for localization of virtual sound
US5835895A (en) * 1997-08-13 1998-11-10 Microsoft Corporation Infinite impulse response filter for 3D sound with tap delay line initialization
KR20010030608A (en) 1997-09-16 2001-04-16 레이크 테크놀로지 리미티드 Utilisation of filtering effects in stereo headphone devices to enhance spatialization of source around a listener
US6091824A (en) * 1997-09-26 2000-07-18 Crystal Semiconductor Corporation Reduced-memory early reflection and reverberation simulator and method
TW417082B (en) * 1997-10-31 2001-01-01 Yamaha Corp Digital filtering processing method, device and Audio/Video positioning device
KR19990041134A (en) * 1997-11-21 1999-06-15 윤종용 3D sound system and 3D sound implementation method using head related transfer function
EP1040466B1 (en) * 1997-12-19 2004-04-14 Daewoo Electronics Corporation Surround signal processing apparatus and method
CN100353664C (en) 1998-03-25 2007-12-05 雷克技术有限公司 Audio signal processing method and appts.
JP3686989B2 (en) 1998-06-10 2005-08-24 収一 佐藤 Multi-channel conversion synthesizer circuit system
JP3657120B2 (en) 1998-07-30 2005-06-08 株式会社アーニス・サウンド・テクノロジーズ Processing method for localizing audio signals for left and right ear audio signals
US6285767B1 (en) * 1998-09-04 2001-09-04 Srs Labs, Inc. Low-frequency audio enhancement system
US6590983B1 (en) * 1998-10-13 2003-07-08 Srs Labs, Inc. Apparatus and method for synthesizing pseudo-stereophonic outputs from a monophonic input
GB2342830B (en) 1998-10-15 2002-10-30 Central Research Lab Ltd A method of synthesising a three dimensional sound-field
US6993480B1 (en) * 1998-11-03 2006-01-31 Srs Labs, Inc. Voice intelligibility enhancement system
US6839438B1 (en) * 1999-08-31 2005-01-04 Creative Technology, Ltd Positional audio rendering
US7031474B1 (en) * 1999-10-04 2006-04-18 Srs Labs, Inc. Acoustic correction apparatus
US7277767B2 (en) * 1999-12-10 2007-10-02 Srs Labs, Inc. System and method for enhanced streaming audio
JP4304401B2 (en) 2000-06-07 2009-07-29 ソニー株式会社 Multi-channel audio playback device
JP4304845B2 (en) * 2000-08-03 2009-07-29 ソニー株式会社 Audio signal processing method and audio signal processing apparatus
JP2002191099A (en) 2000-09-26 2002-07-05 Matsushita Electric Ind Co Ltd Signal processor
US6928168B2 (en) * 2001-01-19 2005-08-09 Nokia Corporation Transparent stereo widening algorithm for loudspeakers
JP2002262385A (en) 2001-02-27 2002-09-13 Victor Co Of Japan Ltd Generating method for sound image localization signal, and acoustic image localization signal generator
US7079658B2 (en) * 2001-06-14 2006-07-18 Ati Technologies, Inc. System and method for localization of sounds in three-dimensional space
JP3435156B2 (en) * 2001-07-19 2003-08-11 松下電器産業株式会社 Sound image localization device
US6557736B1 (en) * 2002-01-18 2003-05-06 Heiner Ophardt Pivoting piston head for pump
AUPS278402A0 (en) * 2002-06-06 2002-06-27 Interactive Communications Closest point algorithm for off-axis near-field radiation calculation
US7529788B2 (en) 2002-10-21 2009-05-05 Neuro Solution Corp. Digital filter design method and device, digital filter design program, and digital filter
TW200408813A (en) 2002-10-21 2004-06-01 Neuro Solution Corp Digital filter design method and device, digital filter design program, and digital filter
FR2847376B1 (en) * 2002-11-19 2005-02-04 France Telecom METHOD FOR PROCESSING SOUND DATA AND SOUND ACQUISITION DEVICE USING THE SAME
DK1320281T3 (en) * 2003-03-07 2013-11-04 Phonak Ag Binaural hearing aid and method for controlling such a hearing aid
EP1320281B1 (en) 2003-03-07 2013-08-07 Phonak Ag Binaural hearing device and method for controlling such a hearing device
DE10344638A1 (en) * 2003-08-04 2005-03-10 Fraunhofer Ges Forschung Generation, storage or processing device and method for representation of audio scene involves use of audio signal processing circuit and display device and may use film soundtrack
US7680289B2 (en) 2003-11-04 2010-03-16 Texas Instruments Incorporated Binaural sound localization using a formant-type cascade of resonators and anti-resonators
US7949141B2 (en) 2003-11-12 2011-05-24 Dolby Laboratories Licensing Corporation Processing audio signals with head related transfer function filters and a reverberator
US7451093B2 (en) * 2004-04-29 2008-11-11 Srs Labs, Inc. Systems and methods of remotely enabling sound enhancement techniques
US20050273324A1 (en) * 2004-06-08 2005-12-08 Expamedia, Inc. System for providing audio data and providing method thereof
KR100725818B1 (en) 2004-07-14 2007-06-11 삼성전자주식회사 Sound reproducing apparatus and method for providing virtual sound source
WO2007033150A1 (en) 2005-09-13 2007-03-22 Srs Labs, Inc. Systems and methods for audio processing
WO2007123788A2 (en) 2006-04-03 2007-11-01 Srs Labs, Inc. Audio signal processing
WO2008035275A2 (en) 2006-09-18 2008-03-27 Koninklijke Philips Electronics N.V. Encoding and decoding of audio objects
US20100029490A1 (en) 2006-09-21 2010-02-04 Koninklijke Philips Electronics N.V. Ink-jet device and method for producing a biological assay substrate using a printing head and means for accelerated motion
WO2008084436A1 (en) 2007-01-10 2008-07-17 Koninklijke Philips Electronics N.V. An object-oriented audio decoder
US20090237564A1 (en) * 2008-03-18 2009-09-24 Invism, Inc. Interactive immersive virtual reality and simulation
EP2194527A3 (en) * 2008-12-02 2013-09-25 Electronics and Telecommunications Research Institute Apparatus for generating and playing object based audio contents


Also Published As

Publication number Publication date
KR101304797B1 (en) 2013-09-05
CN101263739B (en) 2012-06-20
KR20080049741A (en) 2008-06-04
US8027477B2 (en) 2011-09-27
EP1938661A1 (en) 2008-07-02
JP2009508442A (en) 2009-02-26
US20120014528A1 (en) 2012-01-19
PL1938661T3 (en) 2014-10-31
JP4927848B2 (en) 2012-05-09
CN101263739A (en) 2008-09-10
WO2007033150A1 (en) 2007-03-22
US9232319B2 (en) 2016-01-05
CA2621175C (en) 2015-12-22
US20070061026A1 (en) 2007-03-15
EP1938661B1 (en) 2014-04-02

Similar Documents

Publication Publication Date Title
CA2621175A1 (en) Systems and methods for audio processing
CN104335606B (en) Stereo widening over arbitrarily-configured loudspeakers
KR101827032B1 (en) Stereo image widening system
CN103329571B (en) Immersion audio presentation systems
EP1680941B1 (en) Multi-channel audio surround sound from front located loudspeakers
CN102972047B (en) Method and apparatus for reproducing stereophonic sound
US11516616B2 (en) System for and method of generating an audio image
EP2550813A1 (en) Multichannel sound reproduction method and device
KR20180135973A (en) Method and apparatus for audio signal processing for binaural rendering
EP2153695A2 (en) Early reflection method for enhanced externalization
US7197151B1 (en) Method of improving 3D sound reproduction
Kendall et al. Why things don't work: what you need to know about spatial audio
US20200059750A1 (en) Sound spatialization method
CN102547550A (en) Audio system, audio signal processing device and method, and program
US9794717B2 (en) Audio signal processing apparatus and audio signal processing method
CN106465032B (en) The apparatus and method for manipulating input audio signal
GB2337676A (en) Modifying filter implementing HRTF for virtual sound
Ahrens et al. Gentle acoustic crosstalk cancelation using the spectral division method and ambiophonics
US20220328054A1 (en) Audio system height channel up-mixing
Zotter et al. Low-frequency trick to improve externalization with non-individual HRIRs
Gan et al. Elevated speaker projection for digital home entertainment system
Avendano Virtual spatial sound
JPH0775439B2 (en) 3D sound field playback device
Eargle Two-Channel Stereo

Legal Events

Date Code Title Description
EEER Examination request