path: root/OpenAL32/Include/alu.h
Commit log: subject (author, date; files changed, lines -removed/+added)
* Avoid mixing all coefficients together when only some are used (Chris Robinson, 2016-04-15; 1 file changed, -1/+1)
* Avoid unnecessary loops for setting up effect slot b-format buffer mixing (Chris Robinson, 2016-04-14; 1 file changed, -0/+2)
* Move the InitRenderer method to panning.c (Chris Robinson, 2016-04-14; 1 file changed, -3/+13)
* Split aluInitPanning into separate functions for HRTF or UHJ (Chris Robinson, 2016-04-14; 1 file changed, -0/+2)
* Add config options to enable the hq ambisonic decoder (Chris Robinson, 2016-03-16; 1 file changed, -2/+1)
* Add a dual-band ambisonic decoder (Chris Robinson, 2016-03-15; 1 file changed, -1/+2)

  This uses a virtual B-Format buffer for mixing, and then uses a dual-band decoder for improved positional quality. This currently only works with first-order output, since first-order input (from the AL_EXT_BFORMAT extension) would not sound correct when fed through a second- or third-order decoder. This also does not currently implement near-field compensation, since near-field rendering effects are not implemented.
* Use the real output's left and right channels with HRTF (Chris Robinson, 2016-03-11; 1 file changed, -2/+2)
* Calculate HRTF stepping params right before mixing (Chris Robinson, 2016-02-14; 1 file changed, -7/+11)

  This means we track the current params and the target params, rather than the target params and the stepping. This more closely matches the non-HRTF mixers.
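  A minimal sketch of the idea (the struct and function names here are hypothetical, not OpenAL Soft's actual code): by storing the current and target gains, the per-sample step can be derived right before mixing instead of being cached alongside the target.

  ```c
  #include <assert.h>

  typedef struct {
      float Current; /* gain currently being applied */
      float Target;  /* gain we are stepping toward */
  } GainParam;

  /* Advance the gain linearly over 'count' samples; the step is
     computed on the spot from current and target. */
  static float step_gain(GainParam *p, int count)
  {
      float delta = (p->Target - p->Current) / (float)count;
      for (int i = 0; i < count; i++)
          p->Current += delta; /* p->Current would scale sample i here */
      return p->Current;
  }

  int main(void)
  {
      GainParam p = {0.0f, 1.0f};
      step_gain(&p, 64);
      /* after stepping, the current gain has converged on the target */
      assert(p.Current > 0.99f && p.Current < 1.01f);
      return 0;
  }
  ```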
* Calculate channel gain stepping just before mixing (Chris Robinson, 2016-02-14; 1 file changed, -9/+11)
* Rename ComputeBFormatGains to ComputeFirstOrderGains (Chris Robinson, 2016-01-31; 1 file changed, -5/+5)
* Mix to multichannel for effects (Chris Robinson, 2016-01-28; 1 file changed, -4/+6)

  This mixes to a 4-channel first-order ambisonics buffer. With ACN ordering and N3D scaling, this makes it easy to remain compatible with effects that only care about mono input, since channel 0 is an unattenuated mono signal.
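  To illustrate why channel 0 carries the plain mono signal (a hedged sketch with a hypothetical helper, not the library's actual mixer): under N3D normalization the first-order W coefficient is 1.0, so encoding leaves the mono input unscaled in ACN channel 0, while the X/Y/Z components are scaled by sqrt(3) times the direction.

  ```c
  #include <assert.h>
  #include <math.h>

  /* First-order ACN/N3D encode of one mono sample toward dir (x,y,z).
     ACN order is W,Y,Z,X; N3D gives W a coefficient of exactly 1.0. */
  static void encode_foa_n3d(float sample, const float dir[3], float out[4])
  {
      out[0] = sample * 1.0f;                  /* W (ACN 0): unattenuated */
      out[1] = sample * sqrtf(3.0f) * dir[1];  /* Y (ACN 1) */
      out[2] = sample * sqrtf(3.0f) * dir[2];  /* Z (ACN 2) */
      out[3] = sample * sqrtf(3.0f) * dir[0];  /* X (ACN 3) */
  }

  int main(void)
  {
      const float dir[3] = {0.0f, 0.0f, -1.0f}; /* straight ahead */
      float ch[4];
      encode_foa_n3d(0.5f, dir, ch);
      /* a mono-only effect can read channel 0 directly */
      assert(ch[0] == 0.5f);
      return 0;
  }
  ```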
* Separate calculating ambisonic coefficients from the panning gains (Chris Robinson, 2016-01-25; 1 file changed, -14/+34)
* Use doubles for the constructed listener matrix (Chris Robinson, 2015-11-11; 1 file changed, -12/+37)

  This helps the stability of transforms to local space for sources that are at or near the listener. With a single-precision matrix, even FLT_EPSILON might not be enough to detect matching positions.
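  A small demonstration of the precision problem (the values are illustrative, not taken from the library): at a coordinate magnitude of 100000, the spacing between adjacent floats is about 0.0078, so a source 0.001 units from the listener is indistinguishable from the listener in single precision, and the subtraction collapses to zero.

  ```c
  #include <assert.h>

  int main(void)
  {
      float  listener_f = 100000.0f, source_f = 100000.001f;
      double listener_d = 100000.0,  source_d = 100000.001;

      /* float: the 0.001 offset is below the representable
         resolution at this magnitude, so it vanishes */
      assert(source_f - listener_f == 0.0f);

      /* double: the offset survives the subtraction */
      assert(source_d - listener_d > 0.0);
      return 0;
  }
  ```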
* Implement a band-limited sinc resampler (Chris Robinson, 2015-11-05; 1 file changed, -4/+30)

  This is essentially a 12-point sinc resampler, unless it's resampling to a rate higher than the output, at which point it will vary between 12 and 24 points and do anti-aliasing to avoid/reduce frequencies going over Nyquist. Code provided by Christopher Fitzgerald.
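  A simplified sketch of the underlying technique, windowed-sinc (Lanczos) interpolation, under stated assumptions: a = 6 taps on each side gives the 12-point case, and the cutoff scaling that provides the anti-aliasing when pitching up is omitted. Names are illustrative, not the library's.

  ```c
  #include <assert.h>
  #include <math.h>

  #ifndef M_PI
  #define M_PI 3.14159265358979323846
  #endif

  /* Lanczos kernel: sinc(x) windowed by sinc(x/a), zero for |x| >= a. */
  static double lanczos(double x, int a)
  {
      if (x == 0.0) return 1.0;
      if (fabs(x) >= (double)a) return 0.0;
      double px = M_PI * x;
      return (double)a * sin(px) * sin(px/(double)a) / (px*px);
  }

  /* Interpolate src at fractional position pos+frac using 2*a points. */
  static double resample_point(const double *src, int pos, double frac, int a)
  {
      double out = 0.0;
      for (int k = -a+1; k <= a; k++)
          out += src[pos+k] * lanczos((double)k - frac, a);
      return out;
  }

  int main(void)
  {
      /* A constant signal should be reproduced almost exactly,
         since the kernel taps sum to approximately 1. */
      double src[32];
      for (int i = 0; i < 32; i++) src[i] = 1.0;
      double v = resample_point(src, 12, 0.37, 6);
      assert(fabs(v - 1.0) < 0.05);
      return 0;
  }
  ```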
* Pass in the Q parameter for setting the filter parameters (Chris Robinson, 2015-11-01; 1 file changed, -12/+1)

  Also better handle the peaking filter gain.
* Fix a comment (Chris Robinson, 2015-11-01; 1 file changed, -1/+1)
* Use one send gain per buffer channel (Chris Robinson, 2015-10-23; 1 file changed, -1/+1)
* Use a constant value for the post-position padding (Chris Robinson, 2015-10-15; 1 file changed, -2/+5)
* Store the source's previous samples with the voice (Chris Robinson, 2015-10-15; 1 file changed, -0/+3)

  This helps avoid different results when looping is toggled within a couple samples of the loop point, or when a processed buffer is removed while the source is only a couple samples into the next buffer.
* Replace the sinc6 resampler with sinc8, and make SSE versions (Chris Robinson, 2015-10-11; 1 file changed, -4/+5)
* Implement a 6-point sinc-lanczos filter (Chris Robinson, 2015-09-29; 1 file changed, -2/+11)
* Replace the cubic resampler with a 4-point sinc/lanczos filter (Chris Robinson, 2015-09-27; 1 file changed, -3/+3)
* Don't keep selecting the mixer to use (Chris Robinson, 2015-09-27; 1 file changed, -1/+1)
* Increase the max pitch to 255 (Chris Robinson, 2015-09-26; 1 file changed, -1/+1)

  Note that this is the multiple above the device sample rate, rather than the source property limit. It could theoretically be increased to 511 by testing against UINT_MAX instead of INT_MAX, since the increment and positions are using unsigned integers. I'm just being paranoid about overflows.
* Fix updating listener params when forcing updates (Chris Robinson, 2015-09-18; 1 file changed, -0/+2)
* Rename F_2PI to F_TAU (Chris Robinson, 2015-09-13; 1 file changed, -1/+1)
* Move HRTF params and state closer together (Chris Robinson, 2015-02-09; 1 file changed, -3/+3)
* Add missing alignas to CubicLUT declaration (Chris Robinson, 2015-01-13; 1 file changed, -1/+1)
* Remove some unnecessary restrict uses (Chris Robinson, 2014-12-24; 1 file changed, -7/+6)
* Use aluVector and aluMatrix in a couple more places (Chris Robinson, 2014-12-16; 1 file changed, -1/+1)
* Add explicit matrix and vector types to operate with (Chris Robinson, 2014-12-16; 1 file changed, -0/+38)
* Use a lookup table to do cubic resampling (Chris Robinson, 2014-12-15; 1 file changed, -9/+9)
* Transpose the cubic matrix op (Chris Robinson, 2014-12-15; 1 file changed, -6/+6)
* Remove IrSize from DirectParams (Chris Robinson, 2014-11-29; 1 file changed, -1/+0)
* Move the voice's last position and gain out of the Hrtf container (Chris Robinson, 2014-11-24; 1 file changed, -2/+3)
* Partially revert "Use a different method for HRTF mixing" (Chris Robinson, 2014-11-23; 1 file changed, -1/+9)

  The sound localization with virtual channel mixing was just too poor, so while it's more costly to do per-source HRTF mixing, it's unavoidable if you want good localization. This is only partially reverted because having the virtual channel is still beneficial, particularly with B-Format rendering and effect mixing, which otherwise skip HRTF processing. As before, the number of virtual channels can potentially be customized, specifying more or fewer channels depending on the system's needs.
* Rename Voice's NumChannels to OutChannels (Chris Robinson, 2014-11-22; 1 file changed, -1/+1)
* Store the number of output channels in the voice (Chris Robinson, 2014-11-22; 1 file changed, -0/+1)
* Remove an unnecessary union container (Chris Robinson, 2014-11-22; 1 file changed, -3/+1)
* Use a different method for HRTF mixing (Chris Robinson, 2014-11-22; 1 file changed, -27/+1)

  This new method mixes sources normally into a 14-channel buffer with the channels placed all around the listener. HRTF is then applied to the channels given their positions and written to a 2-channel buffer, which gets written out to the device.

  This method has the benefit that HRTF processing becomes more scalable. The costly HRTF filters are applied to the 14-channel buffer after the mix is done, turning it into a post-process with a fixed overhead. Mixing sources is done with normal non-HRTF methods, so increasing the number of playing sources only incurs normal mixing costs. Another benefit is that it improves B-Format playback, since the soundfield gets mixed into speakers covering all three dimensions, which then get filtered based on their locations.

  The main downside to this is that the spatial resolution of the HRTF dataset does not play a big role anymore. However, the hope is that with ambisonics-based panning, the perceptual position of panned sounds will still be good. It is also an option to increase the number of virtual channels for systems that can handle it, or maybe even decrease it for weaker systems.
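  The two-stage structure described above can be sketched as follows. This is a hedged illustration with hypothetical names, and a per-channel stereo gain stands in for the real per-channel HRTF FIR filters; the point is only that stage 2's cost is fixed regardless of how many sources feed stage 1.

  ```c
  #include <assert.h>

  #define NUM_VIRT 14 /* virtual channels surrounding the listener */
  #define BUFSIZE  4  /* tiny buffer for illustration */

  /* Stage 1: mix a mono source into the virtual channels by plain
     panning gains; cost scales with the number of sources. */
  static void mix_source(const float *src, float virt[NUM_VIRT][BUFSIZE],
                         const float gains[NUM_VIRT])
  {
      for (int c = 0; c < NUM_VIRT; c++)
          for (int i = 0; i < BUFSIZE; i++)
              virt[c][i] += src[i] * gains[c];
  }

  /* Stage 2: fixed-overhead post-process folding the virtual channels
     down to 2 outputs (gains here stand in for HRTF filtering). */
  static void hrtf_post(float virt[NUM_VIRT][BUFSIZE],
                        float left[BUFSIZE], float right[BUFSIZE],
                        const float lg[NUM_VIRT], const float rg[NUM_VIRT])
  {
      for (int c = 0; c < NUM_VIRT; c++)
          for (int i = 0; i < BUFSIZE; i++) {
              left[i]  += virt[c][i] * lg[c];
              right[i] += virt[c][i] * rg[c];
          }
  }

  int main(void)
  {
      float src[BUFSIZE] = {1.0f, 0.5f, -0.5f, -1.0f};
      float virt[NUM_VIRT][BUFSIZE] = {{0.0f}};
      float left[BUFSIZE] = {0.0f}, right[BUFSIZE] = {0.0f};
      float gains[NUM_VIRT] = {1.0f}; /* fully panned to channel 0 */
      float lg[NUM_VIRT] = {1.0f}, rg[NUM_VIRT] = {0.0f};

      mix_source(src, virt, gains);
      hrtf_post(virt, left, right, lg, rg);
      assert(left[0] == 1.0f && right[0] == 0.0f);
      return 0;
  }
  ```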
* Use a separate macro for the max output channel count (Chris Robinson, 2014-11-07; 1 file changed, -5/+5)
* Use a method to set omni-directional channel gains (Chris Robinson, 2014-11-04; 1 file changed, -14/+7)
* Support B-Format source rotation with AL_ORIENTATION (Chris Robinson, 2014-10-31; 1 file changed, -3/+4)
* Add preliminary AL_EXT_BFORMAT support (Chris Robinson, 2014-10-31; 1 file changed, -0/+8)

  Currently missing the AL_ORIENTATION source property. Gain stepping also does not work.
* Make ComputeAngleGains use ComputeDirectionalGains (Chris Robinson, 2014-10-02; 1 file changed, -5/+5)
* Don't use ComputeAngleGains for SetGains (Chris Robinson, 2014-10-02; 1 file changed, -1/+5)
* Use an ambisonics-based panning method (Chris Robinson, 2014-09-30; 1 file changed, -0/+8)

  For mono sources, third-order ambisonics is utilized to generate panning gains. The general idea is that a panned mono sound can be encoded into b-format ambisonics as:

      w[i] = sample[i] * 0.7071;
      x[i] = sample[i] * dir[0];
      y[i] = sample[i] * dir[1];
      ...

  and subsequently rendered using:

      output[chan][i] = w[i] * w_coeffs[chan] +
                        x[i] * x_coeffs[chan] +
                        y[i] * y_coeffs[chan] + ...;

  By reordering the math, channel gains can be generated by doing:

      gain[chan] = 0.7071 * w_coeffs[chan] +
                   dir[0] * x_coeffs[chan] +
                   dir[1] * y_coeffs[chan] + ...;

  which then get applied as normal:

      output[chan][i] = sample[i] * gain[chan];

  One of the reasons to use ambisonics for panning is that it provides arguably better reproduction for sounds emanating from between two speakers. As well, this makes it easier to pan in all 3 dimensions, with for instance a "3D7.1" or 8-channel cube speaker configuration, by simply providing the necessary coefficients (this will need some work, since some methods still use angle-based panpot, particularly multi-channel sources).

  Unfortunately, the math to reliably generate the coefficients for a given speaker configuration is too costly to do at run-time. They have to be pre-generated based on a pre-specified speaker arrangement, which means the config options for tweaking speaker angles are no longer supportable. Eventually I hope to provide config options for custom coefficients, which can either be generated and written in manually, or via alsoft-config from user-specified speaker positions.

  The current default set of coefficients were generated using the MATLAB scripts (compatible with GNU Octave) from the excellent Ambisonic Decoder Toolbox, at https://bitbucket.org/ambidecodertoolbox/adt/
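  The reordering above can be verified directly. This sketch uses first order for brevity and placeholder decoder coefficients (not the library's actual data); it checks that encode-then-decode and the precomputed per-channel gain give the same result for a sample.

  ```c
  #include <assert.h>
  #include <math.h>

  #define NUM_COMPONENTS 4 /* first order shown: W, X, Y, Z */

  /* Fold the B-Format encoding vector into one gain per speaker
     channel, as the commit message describes. */
  static float compute_gain(const float coeffs[NUM_COMPONENTS],
                            const float dir[3])
  {
      const float encode[NUM_COMPONENTS] = {
          0.7071f, dir[0], dir[1], dir[2]
      };
      float gain = 0.0f;
      for (int k = 0; k < NUM_COMPONENTS; k++)
          gain += encode[k] * coeffs[k];
      return gain;
  }

  int main(void)
  {
      /* placeholder decoder coefficients for one speaker channel */
      const float coeffs[NUM_COMPONENTS] = {0.3f, 0.5f, -0.2f, 0.1f};
      const float dir[3] = {0.6f, 0.0f, -0.8f};
      float sample = 0.25f;

      /* long way: encode to B-Format, then decode */
      float w = sample*0.7071f, x = sample*dir[0],
            y = sample*dir[1],  z = sample*dir[2];
      float direct = w*coeffs[0] + x*coeffs[1] + y*coeffs[2] + z*coeffs[3];

      /* reordered: apply the precomputed gain to the sample */
      float viaGain = sample * compute_gain(coeffs, dir);

      assert(fabsf(direct - viaGain) < 1e-6f);
      return 0;
  }
  ```

  The design win is that the per-sample work drops from one multiply-add per ambisonic component to a single multiply per channel, with the encoding math paid once per parameter update.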
* Rename activesource to voice (Chris Robinson, 2014-08-21; 1 file changed, -4/+4)
* Use a NULL source for inactive activesources (Chris Robinson, 2014-08-21; 1 file changed, -3/+7)

  Also only access the activesource's source field once per update.
* Combine the direct and send mixers (Chris Robinson, 2014-06-13; 1 file changed, -3/+3)