This is a bit more efficient than calling the normal HRTF mixing function
twice, and it helps keep the values generated by the convolution consistent
with the new HRIR.
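
As a rough sketch of the idea (the function name, signature, and linear fade
shape below are assumptions for illustration, not the actual mixer code), one
pass can convolve with both HRIRs and crossfade the results:

    #include <stddef.h>

    /* Hypothetical single-pass dual-HRIR mix: the old response fades out while
     * the new one fades in over the buffer, so the convolution output ends the
     * fade consistent with the new HRIR. */
    static void MixHrtfDual(float *left, float *right, const float *input,
                            size_t numsamples, size_t irsize,
                            const float (*oldcoeffs)[2], const float (*newcoeffs)[2])
    {
        for(size_t i = 0;i < numsamples;i++)
        {
            float newgain = (float)(i+1) / (float)numsamples; /* 0 -> 1 */
            float oldgain = 1.0f - newgain;                   /* 1 -> 0 */
            for(size_t j = 0;j < irsize && j <= i;j++)
            {
                float smp = input[i-j];
                left[i]  += smp*(oldcoeffs[j][0]*oldgain + newcoeffs[j][0]*newgain);
                right[i] += smp*(oldcoeffs[j][1]*oldgain + newcoeffs[j][1]*newgain);
            }
        }
    }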
|
This improves fading between HRIRs as sources pan around. In particular, it
mitigates the rounding errors that individual coefficients pick up in their
stepping values, as well as issues with interpolating delay values.
It does this by doing two mixing passes for each source: first with the last
coefficients fading to silence, then with the new coefficients fading in from
silence. Added together, the two passes create a linear fade from one HRIR to
the other. Additionally, the gain is applied separately so the individual
coefficients don't step with rounding errors. Although this increases CPU cost
by doing two mixes per source, each mix is a bit cheaper now since the stepping
is simplified to a single gain value, and the overall quality is improved.
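
Schematically, and with invented names rather than the real functions, the two
passes described above might be structured like this:

    #include <stddef.h>

    struct HrtfParams; /* stands in for one set of HRIR coefficients/delays */

    /* Hypothetical helper: convolves 'input' with the HRIR in 'params' while
     * linearly stepping a single gain value from 'gain0' to 'gain1' over the
     * buffer, accumulating into the stereo 'out' buffers. */
    extern void MixHrtfPass(float *out[2], const float *input, size_t numsamples,
                            const struct HrtfParams *params, float gain0, float gain1);

    static void MixSourceHrtf(float *out[2], const float *input, size_t numsamples,
                              const struct HrtfParams *oldparams,
                              const struct HrtfParams *newparams,
                              float lastgain, float targetgain, int crossfading)
    {
        if(crossfading)
        {
            /* Pass 1: previous HRIR, with its gain fading down to silence. */
            MixHrtfPass(out, input, numsamples, oldparams, lastgain, 0.0f);
            /* Pass 2: new HRIR, with its gain fading in from silence. Summed
             * with pass 1, this forms a linear fade between the two HRIRs. */
            MixHrtfPass(out, input, numsamples, newparams, 0.0f, targetgain);
        }
        else /* No HRIR change; a single pass with a steady gain is enough. */
            MixHrtfPass(out, input, numsamples, newparams, targetgain, targetgain);
    }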
|
Unsigned 32-bit offsets actually have some potential overhead on 64-bit targets
for pointer/array accesses due to rules on integer wrapping. No idea how much
impact it has in practice, but it's nice to be correct about it.
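
A contrived example of the concern, not taken from the codebase: with a 32-bit
unsigned index on a 64-bit target the compiler must honor wraparound to 0, so
it may keep re-zero-extending the index, whereas a pointer-sized index folds
directly into the address calculation.

    #include <stddef.h>
    #include <stdint.h>

    /* uint32_t index: i+1 is defined to wrap to 0, which the compiler has to
     * account for when forming the 64-bit address for buf[i]. */
    static float sum_u32(const float *buf, uint32_t count)
    {
        float sum = 0.0f;
        for(uint32_t i = 0;i < count;i++)
            sum += buf[i];
        return sum;
    }

    /* size_t index: matches the pointer width, so no wrapping needs to be
     * considered and the indexing can become a plain pointer increment. */
    static float sum_size(const float *buf, size_t count)
    {
        float sum = 0.0f;
        for(size_t i = 0;i < count;i++)
            sum += buf[i];
        return sum;
    }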
|
This means we track the current params and the target params, rather than the
target params and the stepping. This more closely matches the non-HRTF mixers.
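
As a made-up sketch of the difference (field names and sizes are illustrative
only, not the actual structs):

    #define HRIR_LENGTH 32 /* assumed filter length, for illustration */

    /* Before: target coefficients plus a precomputed per-sample step. */
    typedef struct {
        float Target[HRIR_LENGTH][2];
        float Step[HRIR_LENGTH][2];
    } HrtfStepParams;

    /* After: the values currently in effect and the values to move toward,
     * interpolated at mix time like the non-HRTF mixers do. */
    typedef struct {
        float Current[HRIR_LENGTH][2];
        float Target[HRIR_LENGTH][2];
    } HrtfTargetParams;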
|
The sound localization with virtual channel mixing was just too poor, so while
it's more costly to do per-source HRTF mixing, it's unavoidable if you want
good localization.
This is only partially reverted because having the virtual channel mix is still
beneficial, particularly for B-Format rendering and effect mixing, which would
otherwise skip HRTF processing. As before, the number of virtual channels can
potentially be customized, with more or fewer channels depending on the
system's needs.
|
This new method mixes sources normally into a 14-channel buffer with the
channels placed all around the listener. HRTF is then applied to the channels
given their positions and written to a 2-channel buffer, which gets written out
to the device.
This method has the benefit that HRTF processing becomes more scalable. The
costly HRTF filters are applied to the 14-channel buffer after the mix is done,
turning it into a post-process with a fixed overhead. Mixing sources is done
with normal non-HRTF methods, so increasing the number of playing sources only
incurs normal mixing costs.
Another benefit is that it improves B-Format playback since the soundfield gets
mixed into speakers covering all three dimensions, which then get filtered
based on their locations.
The main downside to this is that the spatial resolution of the HRTF dataset
does not play a big role anymore. However, the hope is that with ambisonics-
based panning, the perceptual position of panned sounds will still be good. It
is also an option to increase the number of virtual channels for systems that
can handle it, or maybe even decrease it for weaker systems.
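
As a rough sketch in C (every name and count here is an assumption, not the
actual implementation), the flow would be:

    #include <stddef.h>

    #define BUFFERSIZE 1024      /* samples per update (assumed) */
    #define NUM_VIRT_CHANNELS 14 /* virtual speakers placed around the listener */

    struct Source;
    extern size_t NumPlayingSources;
    extern struct Source *GetPlayingSource(size_t idx);
    /* Normal (non-HRTF) panned mix of one source into the virtual channels. */
    extern void MixSource(float (*virt)[BUFFERSIZE], struct Source *src, size_t todo);
    /* HRTF filter for one virtual channel, based on its fixed position,
     * accumulated into the 2-channel output. */
    extern void ApplyHrtfChannel(float (*stereo)[BUFFERSIZE], const float *chan,
                                 size_t chanidx, size_t todo);

    static void RenderUpdate(float (*virt)[BUFFERSIZE], float (*stereo)[BUFFERSIZE],
                             size_t todo)
    {
        /* Stage 1: cost scales with the number of playing sources, but only at
         * normal mixing rates since no per-source HRTF filtering is done. */
        for(size_t s = 0;s < NumPlayingSources;s++)
            MixSource(virt, GetPlayingSource(s), todo);

        /* Stage 2: fixed overhead; the costly HRTF filters run once per virtual
         * channel regardless of how many sources are playing. */
        for(size_t c = 0;c < NUM_VIRT_CHANNELS;c++)
            ApplyHrtfChannel(stereo, virt[c], c, todo);
    }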
|
They are still there for auxiliary sends. However, they should go away soon
enough too, and then we won't have to mess around with calculating extra
"predictive" samples in the mixer.
|
The coefficients (which control the volume and panning) already use stepping
to fade the mix gradually, avoiding abrupt changes.
|
This fades the dry mixing gains using a logarithmic curve, which should produce
a smoother transition than a linear one. It functions similarly to a linear
fade except that
    step = (target - current) / numsteps;
    ...
    gain += step;
becomes
    step = powf(target / current, 1.0f / numsteps);
    ...
    gain *= step;
where 'target' and 'current' are clamped to a lower bound greater than 0
(since 0 makes no sense on a logarithmic scale).
Consequently, the non-HRTF direct mixers no longer feed into the click removal
and pending click buffers, as this per-sample fading does an adequate job of
preventing the clicks and pops caused by extreme gain changes. These buffers
should be removed shortly.
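
A minimal standalone version of the multiplicative stepping shown above (the
function name and clamp value are placeholders, not the library's actual ones):

    #include <math.h>

    #define LOWEST_GAIN 0.00001f /* assumed lower clamp; 0 is invalid on a log scale */

    static void FadeGain(float *samples, int numsteps, float current, float target)
    {
        float gain = fmaxf(current, LOWEST_GAIN);
        float step = powf(fmaxf(target, LOWEST_GAIN)/gain, 1.0f/(float)numsteps);
        for(int i = 0;i < numsteps;i++)
        {
            gain *= step;       /* equal ratio per sample -> logarithmic fade */
            samples[i] *= gain; /* apply the faded gain to the dry mix */
        }
    }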
|
This reverts commit 25b9c3d0c15e959d544f5d0ac7ea507ea5f6d69f.
Conflicts:
Alc/mixer_neon.c
Unfortunately this also undoes the Neon-enhanced ApplyCoeffsStep method.
|
This update allows for much more flexibility in the HRTF data. It also allows
HRTF table file names to include "%r" to represent the device's playback
rate (e.g. if you set hrtf-%r.mhr, it will try to use hrtf-44100.mhr or
hrtf-48000.mhr depending on whether the device's output rate is 44100 or
48000, respectively).
The makehrtf utility has also been updated to support more options and input
file formats, as well as the new mhr format.
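
For example, assuming the hrtf_tables option in alsoft.conf is what lists the
table files (check the sample config shipped with your version), a single
entry can cover multiple output rates:

    [general]
    hrtf = true
    hrtf_tables = hrtf-%r.mhr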