Commit messages
|
The source's voice holds a copy of the last properties it received, so listener
updates can make sources recalculate internal properties from that stored copy.
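A minimal sketch of the idea, with invented types rather than the library's actual ones:

```c
/* Illustrative sketch only: the voice keeps a copy of the last source
 * properties it received, so a later listener change can recompute the
 * derived parameters from that stored copy without touching the source. */
typedef struct SourceProps { float gain; } SourceProps;
typedef struct Listener    { float gain; } Listener;

typedef struct Voice {
    SourceProps last_props; /* copy of the most recent source update */
    float       final_gain; /* internal, derived parameter */
} Voice;

/* The source pushed new properties: store the copy and recalculate. */
void voice_apply_source_update(Voice *v, const SourceProps *p, const Listener *l)
{
    v->last_props = *p;
    v->final_gain = p->gain * l->gain;
}

/* Only the listener changed: recalculate from the stored copy. */
void voice_apply_listener_update(Voice *v, const Listener *l)
{
    v->final_gain = v->last_props.gain * l->gain;
}
```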
|
The last attempt averaged the HRIRs according to their contribution to a given
B-Format channel as if they were loudspeakers, and also averaged the HRIR
delays. The latter part resulted in the loss of the ITD (inter-aural time
delay), a key component of HRTF.
This time, the HRIRs are averaged in the same manner, except that instead of
averaging the delays, each delay is applied to the resulting coefficients (for
example, a delay of 8 means the HRIR is written starting at the 8th sample of
the target IR). This roughly doubles the IR length, since the largest delay is
about 35 samples while the filter is normally 32 samples. However, that is
still smaller than the original data set's IRs (which were 256 samples), and it
only needs to be applied to 4 channels for first-order ambisonics rather than
to the 8-channel cube. So it's doing twice as much work per sample, but on only
half the number of samples.
Additionally, since the resulting HRIRs no longer rely on an extra delay line,
a more efficient HRTF mixing function can be made that doesn't use one. Such a
function can also avoid the per-sample stepping parameters the original uses.
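A minimal sketch of that accumulation step; the sizes and names below are assumptions for illustration, not the actual data layout:

```c
/* Sketch: accumulate one measured HRIR into a B-Format channel's IR.  The
 * measurement's delay becomes an offset into the longer target IR instead of
 * being averaged away, so the ITD survives in the coefficients themselves. */
#include <stddef.h>

#define HRIR_LENGTH   32                        /* assumed filter length */
#define MAX_DELAY     35                        /* assumed largest delay */
#define TARGET_LENGTH (HRIR_LENGTH + MAX_DELAY) /* roughly double */

void accumulate_hrir(float target[TARGET_LENGTH],
                     const float hrir[HRIR_LENGTH],
                     float weight, size_t delay)
{
    /* 'weight' is this measurement's contribution to the channel, as if it
     * were a loudspeaker feeding the ambisonic mix. */
    for(size_t i = 0;i < HRIR_LENGTH;i++)
        target[delay + i] += weight * hrir[i];
}
```

With a 35-sample maximum delay and a 32-sample filter, the target IR here comes out to 67 samples, matching the "roughly double" length noted above.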
|
Certain operations on the buffer queue depend on the loop state to behave
properly, so it should not be deferred until the async voice update occurs.
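As an illustration of the kind of dependency meant here (the types and fields are invented, not the actual implementation):

```c
/* Sketch: buffer unqueueing has to see the current looping flag immediately,
 * so the flag is applied to the source directly instead of riding along in
 * the deferred voice update. */
#include <stdbool.h>

typedef struct Source {
    bool looping;    /* applied right away, not deferred */
    int  processed;  /* buffers that have finished playing */
} Source;

/* How many buffers may be unqueued; a looping source keeps cycling through
 * its queue, so none of its buffers count as processed. */
int unqueueable_buffers(const Source *src)
{
    return src->looping ? 0 : src->processed;
}
```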
|
Currently incomplete, as second- and third-order output will not correctly
handle B-Format input buffers. A standalone up-sampler will be needed, similar
to the high-quality decoder.
Also, the output uses ACN channel ordering with SN3D normalization. A config
option will eventually be provided to change this if desired.
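For reference, these are standard ambisonic conventions rather than code from this change: ACN indexes the first-order channels as W, Y, Z, X, and an SN3D-normalized component of order l converts to N3D with a gain of sqrt(2l + 1).

```c
/* Standard first-order ambisonic conventions referenced above. */
#include <math.h>

enum ACNIndex { ACN_W = 0, ACN_Y = 1, ACN_Z = 2, ACN_X = 3 };

/* Gain to convert an SN3D-normalized component of order l to N3D. */
double sn3d_to_n3d_gain(int l)
{
    return sqrt(2.0*l + 1.0);
}
```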
|
It's been disabled forever, and I have no idea how to make it work properly.
Better to just redo it when making something that works.
|
It will only be used with a cube channel setup, so there's no need to have one
for every possible output channel.
|
This helps protect against the device changing unexpectedly from multiple
threads, instead of using the global list/library lock.
|
This fixes a potentially missed state change if an update carrying a new state
got replaced by one that doesn't carry one.
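A small sketch of the fix's logic, with invented field names:

```c
/* When a queued update is overwritten before the mixer consumes it, carry any
 * pending state change forward so it isn't silently lost. */
typedef enum { STATE_UNCHANGED, STATE_PLAYING, STATE_STOPPED } StateChange;

typedef struct Update {
    StateChange state;  /* STATE_UNCHANGED means no state change requested */
    float       gain;
} Update;

void replace_pending_update(Update *pending, const Update *newer)
{
    StateChange carried = pending->state;
    *pending = *newer;
    if(newer->state == STATE_UNCHANGED)
        pending->state = carried;
}
```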
|
This necessitates a change in how source updates are handled. Rather than
sources being updated only when a dependent object's state changes (e.g. a
listener gain change), all source updates must now be provided proactively.
Consequently, apps that don't use any update deferring (AL_SOFT_defer_updates
or alcSuspendContext/alcProcessContext) may use more CPU, since more update
containers will be filled out for the mixer thread to use.
The upside is that there's less blocking between the app's calling thread and
the mixer thread, particularly for vectors and other multi-value properties
(filters and sends). Deferring behavior, when used, is also improved, since
updates that shouldn't be applied yet are simply not provided. And when they
are provided, the mixer doesn't have to ignore them, meaning that deferring a
context doesn't have to synchronously force an update -- the process call will
send any pending updates, which the mixer will apply even if another deferral
occurs before the mixer runs, because the update will still be there waiting on
the next mixer invocation.
There is one slight bug introduced by this commit. When a listener change is
made, or changes are made to multiple sources, while updates are being
deferred, it is possible for the mixer to run while the sources are prepping
their updates, causing some of the source updates to be seen before the others.
This will be fixed in short order.
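A minimal sketch of the publish/consume pattern this implies, using C11 atomics with illustrative types and names:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct SourceUpdate {
    float gain;
    float position[3];
    struct SourceUpdate *next;  /* free-list link once consumed */
} SourceUpdate;

typedef struct Source {
    _Atomic(SourceUpdate*) pending;  /* latest update not yet seen by the mixer */
} Source;

/* Application thread: proactively publish a filled-out update container.
 * Returns any unconsumed previous update so it can go back on a free-list. */
SourceUpdate *source_push_update(Source *src, SourceUpdate *upd)
{
    return atomic_exchange(&src->pending, upd);
}

/* Mixer thread: take the pending update, if any, without blocking. */
SourceUpdate *source_take_update(Source *src)
{
    return atomic_exchange(&src->pending, NULL);
}
```

Deferring then just means holding off on the push; once the process call sends the pending update, it stays in place until the next mixer invocation consumes it, even if another deferral starts first.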
|
Similar to the listener, separate containers are provided atomically for the
mixer thread to apply updates without needing to block, and a free-list is used
to reuse container objects.
A couple of things to note. First, the lock is still used when the effect
state's deviceUpdate method is called, to prevent asynchronous calls to reset
the device from interfering. This can be fixed by using the list lock in ALc.c
instead.
Second, old effect states aren't immediately deleted when the effect type
changes (the actual type, not just its properties). This is because the mixer
thread is intended to be real-time safe and so can't free anything. The old
states are cleared away when updates reuse the container they were kept in, and
they don't incur any extra processing cost, but there may be cases where the
memory is kept around until the effect slot is deleted.
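A rough sketch of that cleanup strategy, using C11 atomics with invented types (the real free-list details differ):

```c
/* The mixer never frees memory: it pushes spent containers onto a lock-free
 * free-list, and a displaced effect state rides along until the non-realtime
 * thread reuses the container and can delete it safely. */
#include <stdatomic.h>
#include <stdlib.h>

typedef struct EffectState { int placeholder; } EffectState;

typedef struct SlotUpdate {
    EffectState *old_state;  /* state displaced by this update, if any */
    struct SlotUpdate *next;
} SlotUpdate;

static _Atomic(SlotUpdate*) free_list;

/* Mixer thread: done with an update container; just a lock-free push. */
void slot_update_release(SlotUpdate *upd)
{
    SlotUpdate *head = atomic_load(&free_list);
    do {
        upd->next = head;
    } while(!atomic_compare_exchange_weak(&free_list, &head, upd));
}

/* Non-realtime thread: grab a container to reuse, deleting any effect state
 * it was keeping alive.  (A production pop would also need to guard against
 * the ABA problem; this is only a sketch.) */
SlotUpdate *slot_update_acquire(void)
{
    SlotUpdate *head = atomic_load(&free_list);
    while(head && !atomic_compare_exchange_weak(&free_list, &head, head->next))
        ;  /* 'head' is refreshed by the failed exchange */
    if(!head)
        return calloc(1, sizeof(*head));
    free(head->old_state);  /* stand-in for destroying the old effect state */
    head->old_state = NULL;
    return head;
}
```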
|
This uses a separate container to provide the relevant properties to the
internal update method, using atomic pointer swaps. A free-list is used to
avoid having too many individual containers.
This allows the mixer to update the internal listener properties without
requiring the lock to protect against async updates. It also allows concurrent
read access to the user-facing property values, even the multi-value ones (e.g.
the vectors).
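As a sketch, this is the kind of snapshot container meant (fields are illustrative); it would be published to the mixer and recycled through a free-list with the same atomic-exchange pattern sketched for the source updates above:

```c
/* One immutable snapshot of the listener properties the internal update
 * needs.  Because the whole set is swapped in atomically, the mixer reads a
 * coherent group of values -- including the multi-value orientation vectors --
 * without holding a lock against the property-setting thread. */
typedef struct ListenerProps {
    float gain;
    float position[3];
    float velocity[3];
    float orient_at[3];
    float orient_up[3];
    struct ListenerProps *next;  /* free-list link once the mixer is done */
} ListenerProps;
```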