path: root/Alc/hrtf.h
* Calculate HRTF coefficients for all B-Format channels at once (Chris Robinson, 2015-02-10, 1 file, -1/+1)
  It's possible to calculate HRTF coefficients for full third-order ambisonics now, but it's still not possible to use them here without upmixing first-order content.
* Pass the (FuMa) channel number to GetBFormatHrtfCoeffs (Chris Robinson, 2015-02-10, 1 file, -1/+1)
* Use B-Format for HRTF's virtual output format (Chris Robinson, 2015-02-09, 1 file, -0/+1)
  This adds the ability to directly decode B-Format with HRTF, though only first-order (WXYZ) for now. Second- and third-order would be easily doable, but we'd need to be able to up-mix first-order content (from the BFORMAT2D and BFORMAT3D buffer formats), since it would be inappropriate to decode lower-order content with a higher-order decoder.
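  As a rough illustration of the idea, one way to decode first-order B-Format with HRTF is to build one HRIR pair per ambisonic channel by accumulating the measured HRIRs of a set of virtual speaker directions, weighted by each channel's decode gain. The sketch below is only that: an illustration under assumed names (build_bformat_hrirs, GetHrirFn, the speaker layout and gains are all hypothetical), not the actual GetBFormatHrtfCoeffs implementation.

    /* Hedged sketch: build one left/right HRIR per first-order (FuMa) B-Format
     * channel by summing per-direction HRIRs weighted by that channel's
     * first-order gains over a notional uniform speaker layout.
     * Assumes hrir_len <= 256. */
    #include <math.h>
    #include <stddef.h>

    #define NUM_BFORMAT_CHANS 4   /* FuMa order: W, X, Y, Z */

    /* Hypothetical callback that fetches the measured left/right HRIR for a
     * given azimuth/elevation (radians). */
    typedef void (*GetHrirFn)(float azimuth, float elevation,
                              float *left, float *right, size_t hrir_len);

    static void build_bformat_hrirs(GetHrirFn get_hrir, size_t hrir_len,
                                    const float (*speaker_dirs)[2], size_t num_speakers,
                                    float *out /* [chan][2][hrir_len], zero-initialized */)
    {
        for(size_t s = 0; s < num_speakers; s++)
        {
            float az = speaker_dirs[s][0], el = speaker_dirs[s][1];
            float left[256], right[256];
            get_hrir(az, el, left, right, hrir_len);

            /* First-order FuMa gains for this direction, used here as the
             * accumulation weights for a (assumed) uniform layout. */
            float gains[NUM_BFORMAT_CHANS] = {
                0.7071f,               /* W */
                cosf(az)*cosf(el),     /* X */
                sinf(az)*cosf(el),     /* Y */
                sinf(el)               /* Z */
            };

            for(size_t c = 0; c < NUM_BFORMAT_CHANS; c++)
            {
                for(size_t i = 0; i < hrir_len; i++)
                {
                    out[(c*2 + 0)*hrir_len + i] += gains[c]*left[i]/(float)num_speakers;
                    out[(c*2 + 1)*hrir_len + i] += gains[c]*right[i]/(float)num_speakers;
                }
            }
        }
    }

  With such per-channel HRIRs, the mixer can filter the four ambisonic channels directly instead of filtering every source individually.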
* Make CalcHrtfDelta more generic (Chris Robinson, 2014-11-24, 1 file, -1/+0)
* Partially revert "Use a different method for HRTF mixing" (Chris Robinson, 2014-11-23, 1 file, -1/+3)
  The sound localization with virtual channel mixing was just too poor, so while it's more costly to do per-source HRTF mixing, it's unavoidable if you want good localization. This is only partially reverted because having the virtual channels is still beneficial, particularly with B-Format rendering and effect mixing, which would otherwise skip HRTF processing. As before, the number of virtual channels can potentially be customized, specifying more or fewer channels depending on the system's needs.
* Use a different method for HRTF mixing (Chris Robinson, 2014-11-22, 1 file, -3/+1)
  This new method mixes sources normally into a 14-channel buffer, with the channels placed all around the listener. HRTF is then applied to the channels given their positions and written to a 2-channel buffer, which gets written out to the device.

  This method has the benefit that HRTF processing becomes more scalable. The costly HRTF filters are applied to the 14-channel buffer after the mix is done, turning it into a post-process with a fixed overhead. Mixing sources is done with normal non-HRTF methods, so increasing the number of playing sources only incurs normal mixing costs.

  Another benefit is that it improves B-Format playback, since the soundfield gets mixed into speakers covering all three dimensions, which then get filtered based on their locations.

  The main downside to this is that the spatial resolution of the HRTF dataset does not play a big role anymore. However, the hope is that with ambisonics-based panning, the perceptual position of panned sounds will still be good. It is also an option to increase the number of virtual channels for systems that can handle it, or maybe even decrease it for weaker systems.
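  The fixed-overhead post-process described here can be pictured as a per-channel FIR convolution pass over the virtual buffer. The following is a minimal, self-contained sketch of that shape only; the names (NUM_VIRT_CHANNELS, HRIR_LENGTH, VirtChannel, hrtf_post_process), buffer layouts, and the direct-form convolution are assumptions for illustration, not OpenAL Soft's actual mixer code.

    #include <stddef.h>

    #define NUM_VIRT_CHANNELS 14
    #define HRIR_LENGTH       32

    typedef struct VirtChannel {
        float hrir[2][HRIR_LENGTH]; /* left/right impulse response for this direction */
        float history[HRIR_LENGTH]; /* previous input samples for the convolution tail */
    } VirtChannel;

    /* Apply HRTF as a post-process with fixed cost: per-source mixing stays
     * cheap no matter how many sources feed the virtual buffer. */
    static void hrtf_post_process(VirtChannel chans[NUM_VIRT_CHANNELS],
                                  const float *virt_buf,  /* frames x NUM_VIRT_CHANNELS, interleaved */
                                  float *stereo_out,      /* frames x 2, interleaved */
                                  size_t frames)
    {
        for(size_t i = 0; i < frames; i++)
        {
            float left = 0.0f, right = 0.0f;
            for(size_t c = 0; c < NUM_VIRT_CHANNELS; c++)
            {
                VirtChannel *ch = &chans[c];
                /* shift the channel's history and insert the new sample */
                for(size_t k = HRIR_LENGTH-1; k > 0; k--)
                    ch->history[k] = ch->history[k-1];
                ch->history[0] = virt_buf[i*NUM_VIRT_CHANNELS + c];
                /* direct-form FIR convolution with this channel's HRIR pair */
                for(size_t k = 0; k < HRIR_LENGTH; k++)
                {
                    left  += ch->hrir[0][k] * ch->history[k];
                    right += ch->hrir[1][k] * ch->history[k];
                }
            }
            stereo_out[i*2 + 0] = left;
            stereo_out[i*2 + 1] = right;
        }
    }

  The key property is visible in the loop structure: the expensive filtering cost depends only on the channel count and HRIR length, not on the number of playing sources.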
* Add a source radius property that determines the directionality of a sound (Chris Robinson, 2014-07-11, 1 file, -2/+2)
  At 0 distance from the listener, the sound is omni-directional. As the source and listener become 'radius' units apart, the sound becomes more directional. With HRTF, an omni-directional sound is handled using 0-delay, pass-through filter coefficients, which are blended with the real delay and coefficients as needed to become more directional.
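  The blending can be thought of as a simple linear interpolation between a unit-impulse (pass-through, zero-delay) response and the measured HRIR, driven by distance relative to the radius. This sketch only illustrates that idea under assumed names (blend_hrir, dirfact); the actual property and mixer code differ, and the real implementation also interpolates the left/right delays toward zero in the same way.

    #include <stddef.h>

    static void blend_hrir(const float *real_hrir, size_t hrir_len,
                           float distance, float radius,
                           float *out_hrir)
    {
        /* 0 => fully omni-directional (pass-through), 1 => fully directional */
        float dirfact = 1.0f;
        if(radius > 0.0f)
            dirfact = (distance < radius) ? distance/radius : 1.0f;

        for(size_t i = 0; i < hrir_len; i++)
        {
            /* pass-through response: unit impulse at sample 0, zero elsewhere */
            float passthru = (i == 0) ? 1.0f : 0.0f;
            out_hrir[i] = passthru*(1.0f - dirfact) + real_hrir[i]*dirfact;
        }
    }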
* Don't pass the device to HRTF methods (Chris Robinson, 2014-06-20, 1 file, -2/+2)
* Move HRTF macros and function declarations to a separate header (Chris Robinson, 2014-02-23, 1 file, -0/+28)