path: root/Alc/ALu.c
Commit log: message (author, date, files changed, lines -/+)
* Properly handle a mono output buffer with the MIDI synths (Chris Robinson, 2015-02-09, 1 file, -1/+1)
* Add a comment detailing how the HRTF channel buffer is set up (Chris Robinson, 2015-02-09, 1 file, -19/+45)
* Move HRTF params and state closer together (Chris Robinson, 2015-02-09, 1 file, -10/+10)
* Cast to the pointer-to-type to increment the buffer (Chris Robinson, 2014-12-21, 1 file, -1/+1)
* Avoid duplicate calculations (Chris Robinson, 2014-12-18, 1 file, -4/+4)
* Use aluVector and aluMatrix in a couple more places (Chris Robinson, 2014-12-16, 1 file, -45/+30)
* Pass a vector to aluMatrixVector (Chris Robinson, 2014-12-16, 1 file, -20/+23)
* Use aluVector in some more places (Chris Robinson, 2014-12-16, 1 file, -37/+30)
* Add explicit matrix and vector types to operate with (Chris Robinson, 2014-12-16, 1 file, -40/+37)
* Use a lookup table to do cubic resampling (Chris Robinson, 2014-12-15, 1 file, -1/+1)
* Don't pass float literals for unsigned ints (Chris Robinson, 2014-12-06, 1 file, -2/+2)
* Remove IrSize from DirectParams (Chris Robinson, 2014-11-29, 1 file, -2/+0)
* Remove an unnecessary maxf() (Chris Robinson, 2014-11-29, 1 file, -1/+1)
* Shorten a couple lines (Chris Robinson, 2014-11-25, 1 file, -3/+3)
* Use linear gain stepping (Chris Robinson, 2014-11-25, 1 file, -20/+14)
* Pass the step count to the Update*Stepping methods (Chris Robinson, 2014-11-25, 1 file, -25/+32)
* Fix __ALSOFT_REVERSE_Z with non-HRTF output (Chris Robinson, 2014-11-25, 1 file, -23/+21)
* Make CalcHrtfDelta more generic (Chris Robinson, 2014-11-24, 1 file, -4/+39)
* Move the voice's last position and gain out of the Hrtf container (Chris Robinson, 2014-11-24, 1 file, -10/+10)
* Use a macro to reduce code duplication (Chris Robinson, 2014-11-23, 1 file, -14/+12)
* Partially revert "Use a different method for HRTF mixing"Chris Robinson2014-11-231-2/+104
| | | | | | | | | | | | The sound localization with virtual channel mixing was just too poor, so while it's more costly to do per-source HRTF mixing, it's unavoidable if you want good localization. This is only partially reverted because having the virtual channel is still beneficial, particularly with B-Format rendering and effect mixing which otherwise skip HRTF processing. As before, the number of virtual channels can potentially be customized, specifying more or less channels depending on the system's needs.
* Rename Voice's NumChannels to OutChannels (Chris Robinson, 2014-11-22, 1 file, -6/+6)
* Only update the necessary channels (Chris Robinson, 2014-11-22, 1 file, -2/+2)
* Mix DirectChannel sources to the non-virtual channel buffers (Chris Robinson, 2014-11-22, 1 file, -1/+18)
* Store the number of output channels in the voice (Chris Robinson, 2014-11-22, 1 file, -0/+2)
* Remove an unnecessary union container (Chris Robinson, 2014-11-22, 1 file, -6/+6)
* Use a different method for HRTF mixing (Chris Robinson, 2014-11-22, 1 file, -111/+52)
  This new method mixes sources normally into a 14-channel buffer with the channels placed all around the listener. HRTF is then applied to the channels given their positions and written to a 2-channel buffer, which gets written out to the device.

  This method has the benefit that HRTF processing becomes more scalable. The costly HRTF filters are applied to the 14-channel buffer after the mix is done, turning it into a post-process with a fixed overhead. Mixing sources is done with normal non-HRTF methods, so increasing the number of playing sources only incurs normal mixing costs. Another benefit is that it improves B-Format playback, since the soundfield gets mixed into speakers covering all three dimensions, which then get filtered based on their locations.

  The main downside to this is that the spatial resolution of the HRTF dataset does not play a big role anymore. However, the hope is that with ambisonics-based panning, the perceptual position of panned sounds will still be good. It is also an option to increase the number of virtual channels for systems that can handle it, or maybe even decrease it for weaker systems.
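To make the scalability point concrete, here is a minimal, hypothetical C sketch of the post-process described above: sources are first mixed into a fixed set of virtual channel buffers, and the HRTF filter pair then runs once per virtual channel, so its cost does not grow with the number of playing sources. The names (NUM_VIRT_CHANNELS, HRIR_LENGTH, ApplyHrtfPostProcess) and the buffer layout are illustrative assumptions, not the actual ALu.c code.

    /* Hypothetical sketch of HRTF-as-post-process; constants and names are
     * placeholders, not the real ALu.c declarations. */
    #define NUM_VIRT_CHANNELS 14    /* virtual speakers placed around the listener */
    #define HRIR_LENGTH       32    /* taps per head-related impulse response */
    #define BUFFERSIZE        1024  /* samples per update (placeholder) */

    /* dry[c]: the virtual-channel buffer every source was mixed into with
     * ordinary (non-HRTF) panning.  hrir_l/hrir_r[c]: the impulse-response
     * pair selected for virtual channel c from its fixed position.
     * out_l/out_r: the 2-channel buffer written to the device. */
    static void ApplyHrtfPostProcess(const float dry[NUM_VIRT_CHANNELS][BUFFERSIZE],
                                     const float hrir_l[NUM_VIRT_CHANNELS][HRIR_LENGTH],
                                     const float hrir_r[NUM_VIRT_CHANNELS][HRIR_LENGTH],
                                     float *out_l, float *out_r, int samples)
    {
        for(int c = 0;c < NUM_VIRT_CHANNELS;c++)
        {
            for(int i = 0;i < samples;i++)
            {
                float l = 0.0f, r = 0.0f;
                /* Direct-form FIR filter.  This inner work depends only on the
                 * channel count and filter length, not on how many sources were
                 * mixed into dry[c].  History from the previous update is ignored
                 * here to keep the sketch short. */
                for(int k = 0;k < HRIR_LENGTH && k <= i;k++)
                {
                    l += dry[c][i-k] * hrir_l[c][k];
                    r += dry[c][i-k] * hrir_r[c][k];
                }
                out_l[i] += l;
                out_r[i] += r;
            }
        }
    }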
* Allocate the DryBuffer dynamically (Chris Robinson, 2014-11-21, 1 file, -1/+1)
* Rename a couple parameters (Chris Robinson, 2014-11-07, 1 file, -3/+3)
* Pass the output device channel count to ALeffectState::process (Chris Robinson, 2014-11-07, 1 file, -2/+2)
* Rename speakers to channels, and remove an old incorrect comment (Chris Robinson, 2014-11-07, 1 file, -14/+14)
* Use a separate macro for the max output channel count (Chris Robinson, 2014-11-07, 1 file, -11/+11)
* Fix 5.1 surround sound (Chris Robinson, 2014-11-07, 1 file, -2/+2)
  Apparently, 5.1 surround sound is supposed to use the "side" channels, not the back channels, and we've been wrong this whole time. That means "5.1 Side" is actually the correct 5.1 setup, and using the back channels is anomalous. Additionally, this means the 5.1 buffer format should also use the side channels instead of the back channels.

  A final note: the 5.1 mixing coefficients are changed so both use the original 5.1 surround sound set (with the surround channels at +/-110 degrees). So the only difference now between 5.1 "side" and 5.1 "back" is the channel labels.
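As a hedged illustration of that last point (the struct and label strings below are invented for this sketch, not the actual ALu.c tables): both 5.1 variants pan the surround pair to +/-110 degrees, and only the channel label, and therefore which device output slot receives the samples, differs. The LFE channel is omitted since it carries no positional information.

    /* Illustrative only: ChanMapSketch and the label strings are hypothetical. */
    #define DEG2RAD(x) ((float)(x) * (3.14159265358979f/180.0f))

    typedef struct ChanMapSketch {
        const char *label;  /* which output channel the gain is routed to */
        float angle;        /* azimuth in radians; 0 = front, positive = left */
    } ChanMapSketch;

    /* "5.1 Side": the correct layout, surround pair on the side labels. */
    static const ChanMapSketch X51SideMap[5] = {
        { "front-left",   DEG2RAD(  30.0f) },
        { "front-right",  DEG2RAD( -30.0f) },
        { "front-center", DEG2RAD(   0.0f) },
        { "side-left",    DEG2RAD( 110.0f) },
        { "side-right",   DEG2RAD(-110.0f) },
    };

    /* "5.1 Back": identical angles; only the surround labels differ. */
    static const ChanMapSketch X51BackMap[5] = {
        { "front-left",   DEG2RAD(  30.0f) },
        { "front-right",  DEG2RAD( -30.0f) },
        { "front-center", DEG2RAD(   0.0f) },
        { "back-left",    DEG2RAD( 110.0f) },
        { "back-right",   DEG2RAD(-110.0f) },
    };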
* Play zero-distance/zero-radius sources from the front (Chris Robinson, 2014-11-05, 1 file, -4/+4)
* Don't use FrontLeft and FrontRight to reference the dry buffer (Chris Robinson, 2014-11-05, 1 file, -4/+4)
* Don't increment the output buffer in the Write_ methods (Chris Robinson, 2014-11-05, 1 file, -13/+17)
* Set gains using the device channel index (Chris Robinson, 2014-11-05, 1 file, -16/+10)
* Use a method to set omni-directional channel gains (Chris Robinson, 2014-11-04, 1 file, -1/+4)
* Add some missing breaks (Chris Robinson, 2014-11-02, 1 file, -0/+2)
* Avoid the ALCdevice_Lock/Unlock wrapper in some places (Chris Robinson, 2014-11-01, 1 file, -2/+3)
* Support B-Format source rotation with AL_ORIENTATION (Chris Robinson, 2014-10-31, 1 file, -1/+42)
* Rename the source's Orientation to Direction (Chris Robinson, 2014-10-31, 1 file, -3/+3)
* Add preliminary AL_EXT_BFORMAT support (Chris Robinson, 2014-10-31, 1 file, -1/+32)
  Currently missing the AL_ORIENTATION source property. Gain stepping also does not work.
* Don't attempt to match a channel input to output (Chris Robinson, 2014-10-12, 1 file, -24/+7)
  I don't like this, but it's currently necessary. The problem is that the ambisonics-based panning does not maintain consistent energy output, which causes sounds mapped directly to an output channel to be louder compared to when being panned. The inconsistent energy output is partly by design, as it's trying to render a full 3D sound field and at least attempts to correct for imbalanced speaker layouts.
* Avoid taking the square-root of the ambient gain (Chris Robinson, 2014-10-11, 1 file, -21/+10)
  Although taking the square root is more correct for preserving the apparent volume, the ambisonics-based panning does not work on the same power scale, making it louder by comparison.
* Add a helper to search for a channel index by name (Chris Robinson, 2014-10-02, 1 file, -10/+4)
* Make ComputeAngleGains use ComputeDirectionalGains (Chris Robinson, 2014-10-02, 1 file, -54/+65)
* Use helpers to set the gain step values (Chris Robinson, 2014-10-02, 1 file, -142/+73)
* Add a cast for MSVC (Chris Robinson, 2014-09-30, 1 file, -1/+1)
* Use an ambisonics-based panning method (Chris Robinson, 2014-09-30, 1 file, -15/+11)
  For mono sources, third-order ambisonics is utilized to generate panning gains. The general idea is that a panned mono sound can be encoded into B-Format ambisonics as:

      w[i] = sample[i] * 0.7071;
      x[i] = sample[i] * dir[0];
      y[i] = sample[i] * dir[1];
      ...

  and subsequently rendered using:

      output[chan][i] = w[i] * w_coeffs[chan] + x[i] * x_coeffs[chan] +
                        y[i] * y_coeffs[chan] + ...;

  By reordering the math, channel gains can be generated by doing:

      gain[chan] = 0.7071 * w_coeffs[chan] + dir[0] * x_coeffs[chan] +
                   dir[1] * y_coeffs[chan] + ...;

  which then get applied as normal:

      output[chan][i] = sample[i] * gain[chan];

  One of the reasons to use ambisonics for panning is that it provides arguably better reproduction for sounds emanating from between two speakers. As well, this makes it easier to pan in all 3 dimensions, with for instance a "3D7.1" or 8-channel cube speaker configuration, by simply providing the necessary coefficients (this will need some work, since some methods still use angle-based panpot, particularly for multi-channel sources).

  Unfortunately, the math to reliably generate the coefficients for a given speaker configuration is too costly to do at run-time. They have to be pre-generated based on a pre-specified speaker arrangement, which means the config options for tweaking speaker angles are no longer supportable. Eventually I hope to provide config options for custom coefficients, which can either be generated and written in manually, or via alsoft-config from user-specified speaker positions.

  The current default set of coefficients was generated using the MATLAB scripts (compatible with GNU Octave) from the excellent Ambisonic Decoder Toolbox, at https://bitbucket.org/ambidecodertoolbox/adt/
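As a small companion to the math above, the following hedged C sketch folds the encode and decode steps into per-channel gains exactly as described. It is first-order only, and ComputePanningGains, ChannelCoeffs, and the coefficient values are invented placeholders, not the generated third-order set mentioned in the commit.

    /* First-order sketch with placeholder coefficients; the real code uses a
     * pre-generated third-order set and different data layouts. */
    #define NUM_OUT_CHANNELS 2  /* example: stereo output */

    /* Decoder coefficients per output channel for the W, X, Y, Z components. */
    static const float ChannelCoeffs[NUM_OUT_CHANNELS][4] = {
        /*    W        X        Y       Z  */
        { 0.3650f, 0.1290f,  0.2230f, 0.0f },  /* placeholder left  */
        { 0.3650f, 0.1290f, -0.2230f, 0.0f },  /* placeholder right */
    };

    /* dir: unit vector toward the source.  Instead of encoding every sample to
     * B-Format (w,x,y,z) and then decoding, the encode weights (0.7071, dir[0],
     * dir[1], dir[2]) are folded into the decode coefficients, yielding one
     * gain per output channel. */
    static void ComputePanningGains(const float dir[3], float ingain,
                                    float gains[NUM_OUT_CHANNELS])
    {
        for(int c = 0;c < NUM_OUT_CHANNELS;c++)
            gains[c] = (0.7071f * ChannelCoeffs[c][0] +
                        dir[0]  * ChannelCoeffs[c][1] +
                        dir[1]  * ChannelCoeffs[c][2] +
                        dir[2]  * ChannelCoeffs[c][3]) * ingain;
    }

    /* The gains are then applied as in the commit message:
     *     output[chan][i] += sample[i] * gains[chan];                       */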