This necessitates a change in how source updates are handled. Rather than
updating sources only when a dependent object's state changes (e.g. a listener
gain change), all source updates must now be provided proactively.
Consequently, apps that do not use any deferring (AL_SOFT_deferred_updates or
alcSuspendContext/alcProcessContext) may use more CPU, since the library will
be filling out more update containers for the mixer thread to use.
The upside is that there's less blocking between the app's calling thread and
the mixer thread, particularly for vectors and other multi-value properties
(filters and sends). Deferring behavior, when used, is also improved, since
updates that shouldn't be applied yet are simply not provided. And when they
are provided, the mixer doesn't have to ignore them, so actually deferring a
context doesn't have to synchronously force an update: the process call will
send any pending updates, and the mixer will apply them even if another
deferral occurs before the mixer runs, because they'll still be there waiting
on the next mixer invocation.
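
As a rough sketch of the hand-off described above (struct and function names
here are illustrative, not OpenAL Soft's actual internals), the calling thread
fills out a complete update container and atomically publishes it for the
mixer thread to pick up on its next run:

    #include <stdatomic.h>
    #include <stdlib.h>

    /* A complete snapshot of the properties the mixer needs for one source. */
    typedef struct SourceUpdate {
        float Gain;
        float Position[3];
        /* ...filters, sends, and other multi-value properties... */
    } SourceUpdate;

    typedef struct Source {
        /* Latest pending update, or NULL if none is waiting. */
        _Atomic(SourceUpdate*) Pending;
    } Source;

    /* App thread: proactively provide a full update after a property change
     * (or on the process call, when updates were being deferred). */
    static void PublishUpdate(Source *src, const SourceUpdate *props)
    {
        SourceUpdate *upd = malloc(sizeof(*upd));
        *upd = *props;
        /* Swap in the new container; reclaim one the mixer never consumed. */
        upd = atomic_exchange(&src->Pending, upd);
        free(upd);
    }

    /* Mixer thread: apply whatever update is waiting. An update published by
     * a process call stays here through a later deferral, so it still gets
     * applied on the next mixer invocation. */
    static void ConsumeUpdate(Source *src, SourceUpdate *out)
    {
        SourceUpdate *upd = atomic_exchange(&src->Pending, NULL);
        if(upd) {
            *out = *upd;
            free(upd);
        }
    }

A real implementation would recycle containers through a freelist rather than
calling malloc/free on these threads, but the blocking-free exchange is the
relevant part here.
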
There is one slight bug introduced by this commit. When a listener change is
made, or multiple sources are changed while updates are being deferred, it is
possible for the mixer to run while the sources are prepping their updates,
causing some of the source updates to be seen before the others. This will be
fixed in short order.
|
Instead of looping over all the coefficients for each channel with multiplies,
just index the single coefficient with a non-zero factor, since for ambisonic
mixing buffers we know only one will be non-zero.
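
A rough illustration of the difference (hypothetical helper names, not the
actual mixer code):

    /* General case: every output channel accumulates the sample scaled by
     * its coefficient, even though most coefficients may be zero. */
    static void MixSampleGeneral(float *out, unsigned numchans,
                                 const float *coeffs, float sample)
    {
        for(unsigned c = 0;c < numchans;c++)
            out[c] += coeffs[c]*sample;
    }

    /* Ambisonic mixing buffer: only one coefficient can be non-zero, so
     * index that channel directly and skip the loop entirely. */
    static void MixSampleAmbi(float *out, unsigned chanidx, float coeff,
                              float sample)
    {
        out[chanidx] += coeff*sample;
    }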
|
This uses a virtual B-Format buffer for mixing, then applies a dual-band
decoder for improved positional quality. This currently only works with first-
order output, since first-order input (from the AL_EXT_BFORMAT extension)
would not sound correct when fed through a second- or third-order decoder.
This also does not yet implement near-field compensation, since near-field
rendering effects are not implemented.
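
As a sketch of what a dual-band decode looks like for one output speaker
(names are illustrative, and the crossover filter that splits the B-Format
signal into low and high bands is assumed, not shown), each band gets its own
coefficient set, commonly one optimized for low-frequency velocity and one
for high-frequency energy:

    #define NUM_AMBI_CHANS 4 /* first-order: W, Y, Z, X */

    /* Decode one sample for one speaker by applying separate coefficient
     * sets to the low- and high-band copies of the B-Format signal. */
    static float DecodeDualBand(const float lf[NUM_AMBI_CHANS],
                                const float hf[NUM_AMBI_CHANS],
                                const float lf_coeffs[NUM_AMBI_CHANS],
                                const float hf_coeffs[NUM_AMBI_CHANS])
    {
        float out = 0.0f;
        for(int c = 0;c < NUM_AMBI_CHANS;c++)
            out += lf_coeffs[c]*lf[c] + hf_coeffs[c]*hf[c];
        return out;
    }
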
This means we track the current params and the target params, rather than the
target params and the stepping. This more closely matches the non-HRTF mixers.
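
A minimal sketch of the arrangement (illustrative names): each mix recomputes
the step from the current and target values, ramping the applied gain in place
instead of carrying a precomputed per-sample step around:

    /* Mix `todo` samples while ramping the gain from *current toward
     * target, leaving *current at the target for the next mix. */
    static void MixMonoRamped(float *restrict dst, const float *restrict src,
                              unsigned todo, float *current, float target)
    {
        float gain = *current;
        const float step = (target - gain) / (float)todo;
        for(unsigned i = 0;i < todo;i++) {
            dst[i] += gain*src[i];
            gain += step;
        }
        *current = target; /* snap to target to avoid floating-point drift */
    }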
|
This mixes to a 4-channel first-order ambisonics buffer. ACN ordering and N3D
scaling make it easy to remain compatible with effects that only care about
mono input, since channel 0 is an unattenuated mono signal.
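
For reference, a first-order encoder under these conventions could look like
the following (a sketch, not the library's actual panning code). With N3D
scaling the W gain is exactly 1.0 for any direction, which is what makes
channel 0 usable as a plain mono feed:

    #include <math.h>

    /* First-order ambisonic gains for a unit direction vector (x,y,z),
     * using ACN channel order (W,Y,Z,X) with N3D normalization. */
    static void CalcFirstOrderGains(float x, float y, float z, float gains[4])
    {
        gains[0] = 1.0f;          /* ACN 0: W, direction-independent */
        gains[1] = sqrtf(3.0f)*y; /* ACN 1: Y */
        gains[2] = sqrtf(3.0f)*z; /* ACN 2: Z */
        gains[3] = sqrtf(3.0f)*x; /* ACN 3: X */
    }
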
This helps the stability of transforms to local space for sources that are at
or near the listener. With a single-precision matrix, even FLT_EPSILON might
not be enough to detect matching positions.
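
To illustrate the problem: FLT_EPSILON is only the spacing between 1.0f and
the next representable float, so an absolute comparison against it can't catch
positions that differ purely by rounding once the values are much larger than
1.0. One generic remedy (a sketch, not necessarily this commit's approach) is
to scale the tolerance by the operands' magnitude:

    #include <float.h>
    #include <math.h>

    /* Non-zero if a and b are equal within rounding error at their
     * magnitude; a plain fabsf(a-b) < FLT_EPSILON test fails for values
     * well away from 1.0f. */
    static int NearlyEqual(float a, float b)
    {
        const float mag = fmaxf(fabsf(a), fabsf(b));
        return fabsf(a - b) <= FLT_EPSILON*fmaxf(mag, 1.0f);
    }
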
This is essentially a 12-point sinc resampler, unless the source rate is
higher than the output rate, at which point it will vary between 12 and 24
points and do anti-aliasing to avoid or reduce frequencies going over Nyquist.
Code provided by Christopher Fitzgerald.
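
The core idea, sketched below with assumed parameters (a Hann window and
hypothetical names; the actual filter design may differ): when downsampling,
stretch the sinc kernel by the rate ratio, which lowers its cutoff below the
new Nyquist and proportionally grows the point count from 12 toward 24:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double Sinc(double x)
    {
        if(x == 0.0) return 1.0;
        return sin(M_PI*x)/(M_PI*x);
    }

    /* One tap of the windowed sinc at `offset` samples from the filter
     * center. `scale` = output rate / source rate, clamped to [0.5, 1.0]:
     * at 1.0 this is a plain 12-point sinc, while at 0.5 the kernel
     * stretches to 24 points with half the cutoff, suppressing frequencies
     * that would alias past Nyquist. */
    static double BandlimitedSincTap(double offset, double scale)
    {
        const double width = 12.0/scale;
        if(fabs(offset) >= width/2.0) return 0.0;
        const double window = 0.5 + 0.5*cos(2.0*M_PI*offset/width);
        return scale*Sinc(scale*offset)*window;
    }
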
Also better handle the peaking filter gain.
|
This helps avoid different results when looping is toggled within a couple
samples of the loop point, or when a processed buffer is removed while the
source is only a couple samples into the next buffer.
|
Note that this is the multiple above the device sample rate, rather than the
source property limit. It could theoretically be increased to 511 by testing
against UINT_MAX instead of INT_MAX, since the increment and positions are
using unsigned integers. I'm just being paranoid about overflows.
|
The sound localization with virtual channel mixing was just too poor, so while
it's more costly to do per-source HRTF mixing, it's unavoidable if you want
good localization.
This is only partially reverted because having the virtual channel is still
beneficial, particularly with B-Format rendering and effect mixing which
otherwise skip HRTF processing. As before, the number of virtual channels can
potentially be customized, specifying more or fewer channels depending on the
system's needs.
|
This new method mixes sources normally into a 14-channel buffer with the
channels placed all around the listener. HRTF is then applied to the channels
given their positions and written to a 2-channel buffer, which gets written out
to the device.
This method has the benefit that HRTF processing becomes more scalable. The
costly HRTF filters are applied to the 14-channel buffer after the mix is done,
turning it into a post-process with a fixed overhead. Mixing sources is done
with normal non-HRTF methods, so increasing the number of playing sources only
incurs normal mixing costs.
Another benefit is that it improves B-Format playback since the soundfield gets
mixed into speakers covering all three dimensions, which then get filtered
based on their locations.
The main downside to this is that the spatial resolution of the HRTF dataset
does not play a big role anymore. However, the hope is that with ambisonics-
based panning, the perceptual position of panned sounds will still be good. It
is also an option to increase the number of virtual channels for systems that
can handle it, or maybe even decrease it for weaker systems.
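
A condensed sketch of the two stages (the names and fixed sizes are
illustrative, and convolution history across mixing blocks is omitted for
brevity):

    #define NUM_VIRTUAL_CHANS 14
    #define HRIR_LENGTH 32

    /* Stage 1: a source mixes into the virtual channels with plain panning
     * gains; more sources only add this normal, non-HRTF mixing cost. */
    static void MixSource(float *dry[NUM_VIRTUAL_CHANS], const float *src,
                          const float gains[NUM_VIRTUAL_CHANS], unsigned todo)
    {
        for(int c = 0;c < NUM_VIRTUAL_CHANS;c++) {
            for(unsigned i = 0;i < todo;i++)
                dry[c][i] += gains[c]*src[i];
        }
    }

    /* Stage 2: fixed-overhead post-process. Each virtual channel is
     * convolved with the HRIR pair for its position around the listener
     * and summed into the stereo output. */
    static void HrtfPostProcess(float *left, float *right,
        float *const dry[NUM_VIRTUAL_CHANS],
        const float hrirs[NUM_VIRTUAL_CHANS][2][HRIR_LENGTH], unsigned todo)
    {
        for(int c = 0;c < NUM_VIRTUAL_CHANS;c++) {
            for(unsigned i = 0;i < todo;i++) {
                for(unsigned j = 0;j < HRIR_LENGTH && j <= i;j++) {
                    left[i]  += hrirs[c][0][j]*dry[c][i-j];
                    right[i] += hrirs[c][1][j]*dry[c][i-j];
                }
            }
        }
    }
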
Currently missing the AL_ORIENTATION source property. Gain stepping also does
not work.
|