path: root/Alc/ALu.c
Commit history, most recent first. Each entry: message (author, date; files changed, lines removed/added).
* Set XYZ channel gains for source sends to 0 (Chris Robinson, 2015-10-23; 1 file, -80/+107)
  It's cleaner to just set the gains to 0 rather than to special-case B-Format in the mixer.
* Use one send gain per buffer channel (Chris Robinson, 2015-10-23; 1 file, -11/+16)
* Return the new vector result from aluMatrixVector (Chris Robinson, 2015-10-22; 1 file, -14/+12)
* Remove the MIDI code (Chris Robinson, 2015-10-20; 1 file, -3/+0)
  The extension's not going anywhere, and it can't do anything fluidsynth can't. The code maintenance and bloat is not worth keeping around, and ideally the AL API would be able to facilitate MIDI-like behavior anyway (envelopes, start-at-time, etc).
* Round the calculated stepping value (Chris Robinson, 2015-10-15; 1 file, -10/+2)
* Replace the resample_fir6 declaration with resample_fir8 (Chris Robinson, 2015-10-12; 1 file, -1/+1)
* Implement a 6-point sinc-lanczos filter (Chris Robinson, 2015-09-29; 1 file, -0/+1)
* Fix resample_fir4 link error (Aaron Jacobs, 2015-09-29; 1 file, -1/+1)
* Increase the max pitch to 255 (Chris Robinson, 2015-09-26; 1 file, -3/+0)
  Note that this is the multiple above the device sample rate, rather than the source property limit. It could theoretically be increased to 511 by testing against UINT_MAX instead of INT_MAX, since the increment and positions are using unsigned integers. I'm just being paranoid about overflows.
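The limit discussed above can be sketched as a clamp on the mixer's fixed-point increment. The sketch below is illustrative only: the fraction size, the rounding (per the "Round the calculated stepping value" entry), and all names are assumptions, not necessarily what the library uses.

```c
/* Hypothetical sketch: the fixed-point step is pitch * source_rate /
 * device_rate in a .12-fraction format. Capping the ratio at 255 keeps
 * position + increment safely below INT_MAX for signed comparisons. */
#include <stdint.h>

#define FRACTION_BITS 12
#define FRACTION_ONE  (1u << FRACTION_BITS)
#define MAX_RATIO     255u  /* max resampling ratio vs. the device rate */

static uint32_t compute_step(double pitch_ratio)
{
    double step = pitch_ratio*FRACTION_ONE + 0.5;  /* rounded */
    if(step >= (double)(MAX_RATIO << FRACTION_BITS))
        return MAX_RATIO << FRACTION_BITS;
    if(step < 1.0)
        return 1;  /* never a zero increment */
    return (uint32_t)step;
}
```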
* Remove unneeded clamping (Chris Robinson, 2015-09-24; 1 file, -24/+16)
* Use N3D scaling instead of FuMa (Chris Robinson, 2015-09-23; 1 file, -7/+7)
* Fix updating listener params when forcing updates (Chris Robinson, 2015-09-18; 1 file, -32/+40)
* Update properties and clear wet buffers before mixing/processing (Chris Robinson, 2015-09-13; 1 file, -52/+69)
* Explicitly convert to int in the aluF2I/S/B functions (Chris Robinson, 2015-09-07; 1 file, -9/+16)
* Use ACN ordering for ambisonics coefficients arrays (Chris Robinson, 2015-08-28; 1 file, -4/+8)
  Note that it still uses FuMa scalings internally. Coefficients loaded from config files specify if they're FuMa (in both ordering and scaling) or N3D, and will get reordered or rescaled as needed.
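For first-order ambisonics, the reorder/rescale mentioned above is small enough to show concretely: FuMa orders the channels W,X,Y,Z while ACN orders them W,Y,Z,X; FuMa also scales W by 1/sqrt(2), and converting its first-order components to N3D multiplies them by sqrt(3). The function below is a hypothetical sketch of that conversion, not the library's actual code.

```c
/* Sketch: convert first-order FuMa-ordered, FuMa-scaled coefficients to
 * ACN-ordered, N3D-scaled ones. Function name is illustrative. */
#include <math.h>

static void fuma_to_acn_n3d(const float fuma[4], float acn[4])
{
    acn[0] = fuma[0] * sqrtf(2.0f); /* W: ACN 0, undo FuMa's -3dB on W   */
    acn[1] = fuma[2] * sqrtf(3.0f); /* Y: FuMa index 2 -> ACN 1, ->N3D   */
    acn[2] = fuma[3] * sqrtf(3.0f); /* Z: FuMa index 3 -> ACN 2, ->N3D   */
    acn[3] = fuma[1] * sqrtf(3.0f); /* X: FuMa index 1 -> ACN 3, ->N3D   */
}
```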
* Avoid temporary vector objects (Chris Robinson, 2015-08-24; 1 file, -9/+13)
* Revert "Fix B-Format rotation" (Chris Robinson, 2015-08-24; 1 file, -3/+3)
  This reverts commit 7ffb9b3056ab280d5d9408fd023f3cfb370ed103. It was behaving as appropriate before (orienting left did pan it left for the listener); I was apparently just misinterpreting the matrix.
* Fix B-Format rotation (Chris Robinson, 2015-08-23; 1 file, -3/+3)
  The rotation erroneously specified the orientation of the source relative to the sound field, whereas it should be the orientation of the sound field *and* source relative to the listener. So now when the source is oriented left, the front of the sound field is to the left of the listener.
* Change source radius behavior (Chris Robinson, 2015-07-05; 1 file, -19/+33)
  For sources with a non-0 radius:
    When distance <= radius, factor = distance/radius * 0.5
    When distance >  radius, factor = 1 - asinf(radius/distance)/PI
  Also, avoid using Position after calculating the localized direction and distance.
* Update a couple comments (Chris Robinson, 2015-07-04; 1 file, -3/+4)
* Add an option for "basic" HRTF rendering (Chris Robinson, 2015-02-11; 1 file, -2/+2)
  This method is intended to help development by easily testing the quality of the B-Format encode and B-Format-to-HRTF decode. When used with HRTF, all sources are rendered using the virtual B-Format output, rather than just B-Format sources.

  Despite the CPU cost savings (only four channels need to be filtered with HRTF, while sources all render normally), the spatial acuity offered by the B-Format output is pretty poor since it's only first-order ambisonics, so "full" HRTF rendering is definitely preferred. It's /possible/ for some systems to be edge cases that prefer the CPU cost savings provided by basic over the sharper localization provided by full, and you do still get 3D positional cues, but this is unlikely to be an actual use-case in practice.
* Use a single statement to declare the buffer format channel maps (Chris Robinson, 2015-02-10; 1 file, -14/+9)
* Use B-Format for HRTF's virtual output format (Chris Robinson, 2015-02-09; 1 file, -4/+4)
  This adds the ability to directly decode B-Format with HRTF, though only first-order (WXYZ) for now. Second- and third-order would be easily doable; however, we'd need to be able to up-mix first-order content (from the BFORMAT2D and BFORMAT3D buffer formats) since it would be inappropriate to decode lower-order content with a higher-order decoder.
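A minimal sketch of the decode step described above: each virtual speaker gets a direction-weighted sum of the four B-Format channels, and that channel is then filtered with the HRTF for its direction. The sqrt(2) on W and the 0.5 normalization below assume FuMa-scaled input and a plain sampling decoder; the library's actual decoder matrices differ, and the function name is made up.

```c
/* Decode first-order ("WXYZ") B-Format for one virtual speaker whose unit
 * direction is (dx, dy, dz). FuMa channel order: W, X, Y, Z. */
static float decode_wxyz(const float wxyz[4], float dx, float dy, float dz)
{
    return 0.5f*(1.41421356f*wxyz[0] + wxyz[1]*dx + wxyz[2]*dy + wxyz[3]*dz);
}
```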
* Properly handle a mono output buffer with the MIDI synths (Chris Robinson, 2015-02-09; 1 file, -1/+1)
* Add a comment detailing how the HRTF channel buffer is set up (Chris Robinson, 2015-02-09; 1 file, -19/+45)
* Move HRTF params and state closer together (Chris Robinson, 2015-02-09; 1 file, -10/+10)
* Cast to the pointer-to-type to increment the buffer (Chris Robinson, 2014-12-21; 1 file, -1/+1)
* Avoid duplicate calculations (Chris Robinson, 2014-12-18; 1 file, -4/+4)
* Use aluVector and aluMatrix in a couple more places (Chris Robinson, 2014-12-16; 1 file, -45/+30)
* Pass a vector to aluMatrixVector (Chris Robinson, 2014-12-16; 1 file, -20/+23)
* Use aluVector in some more places (Chris Robinson, 2014-12-16; 1 file, -37/+30)
* Add explicit matrix and vector types to operate with (Chris Robinson, 2014-12-16; 1 file, -40/+37)
* Use a lookup table to do cubic resampling (Chris Robinson, 2014-12-15; 1 file, -1/+1)
* Don't pass float literals for unsigned ints (Chris Robinson, 2014-12-06; 1 file, -2/+2)
* Remove IrSize from DirectParams (Chris Robinson, 2014-11-29; 1 file, -2/+0)
* Remove an unnecessary maxf() (Chris Robinson, 2014-11-29; 1 file, -1/+1)
* Shorten a couple lines (Chris Robinson, 2014-11-25; 1 file, -3/+3)
* Use linear gain stepping (Chris Robinson, 2014-11-25; 1 file, -20/+14)
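Linear gain stepping, as in the entry above, means the mixer adds a constant per-sample delta until the target gain is reached, rather than multiplying by a per-sample factor (an exponential ramp). A minimal sketch, with illustrative names only:

```c
/* Mix `count` samples of `in` into `out`, ramping linearly from `gain`
 * to `target` over `steps` samples, then holding at `target`. */
static void mix_ramped(float *out, const float *in, int count,
                       float gain, float target, int steps)
{
    float delta = (steps > 0) ? (target - gain)/(float)steps : 0.0f;
    for(int i = 0; i < count; i++)
    {
        if(steps > 0) { gain += delta; steps--; }
        else gain = target;
        out[i] += in[i]*gain;
    }
}
```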
* Pass the step count to the Update*Stepping methods (Chris Robinson, 2014-11-25; 1 file, -25/+32)
* Fix __ALSOFT_REVERSE_Z with non-HRTF output (Chris Robinson, 2014-11-25; 1 file, -23/+21)
* Make CalcHrtfDelta more generic (Chris Robinson, 2014-11-24; 1 file, -4/+39)
* Move the voice's last position and gain out of the Hrtf container (Chris Robinson, 2014-11-24; 1 file, -10/+10)
* Use a macro to reduce code duplication (Chris Robinson, 2014-11-23; 1 file, -14/+12)
* Partially revert "Use a different method for HRTF mixing" (Chris Robinson, 2014-11-23; 1 file, -2/+104)
  The sound localization with virtual channel mixing was just too poor, so while it's more costly to do per-source HRTF mixing, it's unavoidable if you want good localization.

  This is only partially reverted because having the virtual channel is still beneficial, particularly with B-Format rendering and effect mixing, which otherwise skip HRTF processing. As before, the number of virtual channels can potentially be customized, specifying more or fewer channels depending on the system's needs.
* Rename Voice's NumChannels to OutChannels (Chris Robinson, 2014-11-22; 1 file, -6/+6)
* Only update the necessary channels (Chris Robinson, 2014-11-22; 1 file, -2/+2)
* Mix DirectChannel sources to the non-virtual channel buffers (Chris Robinson, 2014-11-22; 1 file, -1/+18)
* Store the number of output channels in the voice (Chris Robinson, 2014-11-22; 1 file, -0/+2)
* Remove an unnecessary union container (Chris Robinson, 2014-11-22; 1 file, -6/+6)
* Use a different method for HRTF mixing (Chris Robinson, 2014-11-22; 1 file, -111/+52)
  This new method mixes sources normally into a 14-channel buffer with the channels placed all around the listener. HRTF is then applied to the channels given their positions and written to a 2-channel buffer, which gets written out to the device.

  This method has the benefit that HRTF processing becomes more scalable. The costly HRTF filters are applied to the 14-channel buffer after the mix is done, turning it into a post-process with a fixed overhead. Mixing sources is done with normal non-HRTF methods, so increasing the number of playing sources only incurs normal mixing costs.

  Another benefit is that it improves B-Format playback, since the soundfield gets mixed into speakers covering all three dimensions, which then get filtered based on their locations.

  The main downside to this is that the spatial resolution of the HRTF dataset does not play a big role anymore. However, the hope is that with ambisonics-based panning, the perceptual position of panned sounds will still be good. It is also an option to increase the number of virtual channels for systems that can handle it, or maybe even decrease it for weaker systems.
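The fixed-overhead post-process described above can be sketched as a single pass that convolves each virtual channel with its per-ear impulse response and accumulates into the stereo output. Everything here is illustrative (channel count, HRIR length, buffer sizes, names); the point is only that the filter cost is NUM_VIRTUAL filters regardless of how many sources are playing.

```c
/* Sketch: apply HRTF as a post-process over virtual speaker buffers.
 * Sources have already been mixed into chanbuf by normal panning. */
#define NUM_VIRTUAL 14   /* virtual speakers around the listener */
#define HRIR_LEN    32   /* impulse-response taps per ear (illustrative) */
#define BUF_LEN     1024

static void hrtf_post_process(float *left, float *right, int count,
                              float chanbuf[NUM_VIRTUAL][BUF_LEN],
                              float hrir_l[NUM_VIRTUAL][HRIR_LEN],
                              float hrir_r[NUM_VIRTUAL][HRIR_LEN])
{
    for(int c = 0; c < NUM_VIRTUAL; c++)
    {
        for(int i = 0; i < count; i++)
        {
            float l = 0.0f, r = 0.0f;
            /* direct-form convolution of this channel with its HRIR pair
             * (history before the buffer start is ignored in this sketch) */
            for(int j = 0; j < HRIR_LEN && j <= i; j++)
            {
                l += hrir_l[c][j]*chanbuf[c][i-j];
                r += hrir_r[c][j]*chanbuf[c][i-j];
            }
            left[i] += l;
            right[i] += r;
        }
    }
}
```

Increasing NUM_VIRTUAL trades CPU for spatial resolution, which matches the customization note in the commit message.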