Since the wet path is essentially the room's response to a sound, the
direction of the sound relative to the listener doesn't change the amount of
energy the room receives. Instead, the surface area covered by the source's
cones dictates how much of the sound's volume the room gets.
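
As a rough illustration of that idea (a sketch only, not the code from this
commit; the helper name and the linear treatment of the transition band are
assumptions), the wet-path scale can be derived from how much of the sphere
each cone region covers:

    #include <math.h>

    /* Hypothetical helper: direction-independent gain scale for the wet path,
     * derived from the source's cone settings.  The fraction of the sphere
     * inside a cone of half-angle a is (1 - cos(a))/2. */
    static float cone_wet_scale(float inner_angle_deg, float outer_angle_deg,
                                float outer_gain)
    {
        const float deg2rad = (float)(M_PI/180.0);
        float inner = (1.0f - cosf(inner_angle_deg*deg2rad/2.0f)) / 2.0f;
        float outer = (1.0f + cosf(outer_angle_deg*deg2rad/2.0f)) / 2.0f;
        float trans = 1.0f - inner - outer;

        /* Gain is 1 inside the inner cone, outer_gain beyond the outer cone,
         * and roughly halfway in between across the transition band.  The
         * weighted sum is the energy the room receives relative to an
         * omnidirectional source. */
        return inner*1.0f + trans*(1.0f + outer_gain)/2.0f + outer*outer_gain;
    }
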
This fixes a potentially missed state change if an update that carries a new
state gets replaced by one that doesn't.
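
A minimal sketch of the kind of fix this describes, assuming updates are
published as atomically swapped containers (the type and field names below are
illustrative, not the project's):

    #include <stdatomic.h>

    typedef struct SourceUpdate {
        int has_new_state;  /* non-zero if this update carries a state change */
        int new_state;      /* e.g. AL_PLAYING, AL_STOPPED */
        /* ...other property values... */
    } SourceUpdate;

    /* Publish 'upd' for the mixer.  If it replaces a pending update that
     * carried a state change the mixer never consumed, carry that state
     * forward so the change isn't silently dropped. */
    static void publish_update(_Atomic(SourceUpdate*) *pending, SourceUpdate *upd)
    {
        SourceUpdate *old = atomic_load(pending);
        do {
            if(old && old->has_new_state && !upd->has_new_state)
            {
                upd->new_state = old->new_state;
                upd->has_new_state = 1;
            }
        } while(!atomic_compare_exchange_weak(pending, &old, upd));
        /* 'old' would now be returned to a free-list for reuse. */
    }
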
This necessitates a change in how source updates are handled. Rather than just
being able to update sources when a dependent object state is changed (e.g. a
listener gain change), now all source updates must be proactively provided.
Consequently, apps that do not use any deferring (AL_SOFT_defer_updates or
alcSuspendContext/alcProcessContext) may use more CPU, since they'll be
filling out more update containers for the mixer thread to use.
The upside is that there's less blocking between the app's calling thread and
the mixer thread, particularly for vectors and other multi-value properties
(filters and sends). Deferring behavior when used is also improved, since
updates that shouldn't be applied yet are simply not provided. And when they
are provided, the mixer doesn't have to ignore them, meaning the actual
deferring of a context doesn't have to synchronously force an update -- the
process call will send any pending updates, which the mixer will apply even if
another deferral occurs before the mixer runs, because it'll still be there
waiting on the next mixer invocation.
There is one slight bug introduced by this commit. When a listener change is
made, or changes are made to multiple sources, while updates are being
deferred, it is possible for the mixer to run while the sources are prepping
their updates, causing some of the source updates to be seen before the
others. This will be fixed in short order.
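
A sketch of what "proactively provided" updates can look like from the
application thread's side, under the assumptions that every property change
publishes a complete snapshot and that deferring simply withholds the publish
(the names are made up, and the container free-list is reduced to malloc/free
for brevity):

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct SourceProps {
        float Gain, Pitch;
        float Position[3], Velocity[3], Direction[3];
        /* ...filters, sends, and every other property the mixer needs... */
    } SourceProps;

    typedef struct Source {
        SourceProps Props;            /* user-facing values, app thread only */
        _Atomic(SourceProps*) Update; /* pending full snapshot for the mixer */
        int Deferred;                 /* set during alcSuspendContext or
                                       * AL_SOFT_defer_updates deferral */
    } Source;

    /* Called after any property change.  Rather than poking individual fields
     * for the mixer to notice, a complete snapshot is published in one atomic
     * swap, so the mixer never sees a half-written set of values. */
    static void Source_PushUpdate(Source *src)
    {
        SourceProps *snapshot, *old;

        if(src->Deferred)
            return;  /* deferring: just don't provide the update yet */

        snapshot = malloc(sizeof(*snapshot));
        memcpy(snapshot, &src->Props, sizeof(*snapshot));

        /* If the mixer hasn't consumed the previous snapshot yet, it simply
         * gets replaced; a real implementation recycles it via a free-list. */
        old = atomic_exchange(&src->Update, snapshot);
        free(old);
    }
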
Similar to the listener, separate containers are provided atomically for the
mixer thread to apply updates without needing to block, and a free-list is used
to reuse container objects.
A couple of things to note. First, the lock is still used when the effect state's
deviceUpdate method is called to prevent asynchronous calls to reset the device
from interfering. This can be fixed by using the list lock in ALc.c instead.
Secondly, old effect states aren't immediately deleted when the effect type
changes (the actual type, not just its properties). This is because the mixer
thread is intended to be real-time safe, and so can't be freeing anything. They
are cleared away when updates reuse the container they were kept in, and they
don't incur any extra processing cost, but there may be cases where the memory
is kept around until the effect slot is deleted.
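
The following sketch shows the mixer-thread side of that scheme and why the
old effect state can't be freed there; the type names (EffectSlotProps,
ALeffectState) and the lock-free free-list push are illustrative assumptions,
not the project's actual code:

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct ALeffectState ALeffectState;  /* opaque per-effect DSP state */

    typedef struct EffectSlotProps {
        ALeffectState *State;   /* new state if the effect type changed,
                                 * or NULL to keep the current one */
        float Gain;
        /* ...effect parameters... */
        struct EffectSlotProps *next;  /* free-list link */
    } EffectSlotProps;

    typedef struct EffectSlot {
        _Atomic(EffectSlotProps*) Update;
        _Atomic(EffectSlotProps*) FreeList;
        ALeffectState *State;   /* state currently used for mixing */
    } EffectSlot;

    /* Real-time safe: no locks, no malloc, no free. */
    static void EffectSlot_ApplyUpdate(EffectSlot *slot)
    {
        EffectSlotProps *props = atomic_exchange(&slot->Update, NULL);
        if(!props) return;

        if(props->State)
        {
            /* Swap in the new DSP state and stash the old one back in the
             * container.  It gets destroyed later, by a non-real-time thread,
             * when this container is reused for another update. */
            ALeffectState *oldstate = slot->State;
            slot->State = props->State;
            props->State = oldstate;
        }
        /* ...apply gain and the other parameters... */

        /* Return the container (and any stashed old state) to the free-list. */
        props->next = atomic_load(&slot->FreeList);
        while(!atomic_compare_exchange_weak(&slot->FreeList, &props->next, props))
            ;
    }
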
This uses a separate container to provide the relevant properties to the
internal update method, using atomic pointer swaps. A free-list is used to
avoid having too many individual containers.
This allows the mixer to update the internal listener properties without
requiring the lock to protect against async updates. It also allows concurrent
read access to the user-facing property values, even the multi-value ones (e.g.
the vectors).
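
A condensed sketch of that pointer swap from the mixer's point of view (the
names are illustrative, and the producer side is omitted):

    #include <stdatomic.h>

    typedef struct ListenerProps {
        float Position[3], Velocity[3], Forward[3], Up[3];
        float Gain, MetersPerUnit;
        struct ListenerProps *next;  /* free-list link */
    } ListenerProps;

    typedef struct Listener {
        _Atomic(ListenerProps*) Update;    /* written by the app thread */
        _Atomic(ListenerProps*) FreeList;  /* containers ready for reuse */
        ListenerProps Params;              /* values the mixer works from */
    } Listener;

    static void Listener_ApplyUpdate(Listener *listener)
    {
        /* Take whatever the app last published, without blocking it. */
        ListenerProps *props = atomic_exchange(&listener->Update, NULL);
        if(!props) return;             /* nothing changed since last time */

        listener->Params = *props;     /* multi-value properties land as a unit */

        /* Hand the container back for the app thread to refill later. */
        props->next = atomic_load(&listener->FreeList);
        while(!atomic_compare_exchange_weak(&listener->FreeList, &props->next,
                                            props))
            ;
    }
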
This helps ensure async listener/context property changes affect all playing
sources at the same time.

Instead of looping over all the coefficients for each channel and multiplying,
when we know only one coefficient will have a non-zero factor for ambisonic
mixing buffers, just index the one with the non-zero factor directly.
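
In other words, something along these lines (a sketch with made-up names; the
real mixers are more involved):

    /* Generic path: multiply-accumulate every output channel, even though all
     * but one factor is 0 when the target is an ambisonic mixing buffer. */
    static void MixRow_Generic(float *restrict *Out, const float *In,
                               const float *Gains, int num_chans, int num_samples)
    {
        for(int c = 0;c < num_chans;c++)
        {
            for(int i = 0;i < num_samples;i++)
                Out[c][i] += In[i] * Gains[c];
        }
    }

    /* Ambisonic path: the single non-zero coefficient's index and factor were
     * recorded when the gains were set up, so just index it directly. */
    static void MixRow_Ambi(float *restrict *Out, const float *In,
                            int chan_index, float factor, int num_samples)
    {
        for(int i = 0;i < num_samples;i++)
            Out[chan_index][i] += In[i] * factor;
    }
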
This uses a virtual B-Format buffer for mixing, and then uses a dual-band
decoder for improved positional quality. This currently only works with first-
order output since first-order input (from the AL_EXT_BFORMAT extension) would
not sound correct when fed through a second- or third-order decoder.
This also does not currently implement near-field compensation since near-field
rendering effects are not implemented.
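
As a rough sketch of what a dual-band decode involves (the band splitter here
is a naive one-pole crossover and the matrices are left as inputs; none of
these names are the project's):

    #define NUM_AMBI_CHANS 4   /* first-order: W, Y, Z, X in ACN order */

    /* Naive crossover: lp is a one-pole low-passed copy and hp the remainder,
     * so lp + hp reconstructs the input exactly. */
    typedef struct BandSplitter { float z1, coeff; } BandSplitter;

    static void BandSplitter_process(BandSplitter *bs, float *hpout, float *lpout,
                                     const float *input, int count)
    {
        for(int i = 0;i < count;i++)
        {
            bs->z1 += bs->coeff*(input[i] - bs->z1);
            lpout[i] = bs->z1;
            hpout[i] = input[i] - bs->z1;
        }
    }

    /* Decode each band with its own matrix (typically velocity/rV-optimized
     * below the crossover, energy/rE-optimized above it) and sum the results
     * into the speaker feeds.  Assumes num_samples <= 1024. */
    static void DualBandDecode(BandSplitter splitters[NUM_AMBI_CHANS],
                               const float (*MatrixHF)[NUM_AMBI_CHANS],
                               const float (*MatrixLF)[NUM_AMBI_CHANS],
                               const float *AmbiIn[NUM_AMBI_CHANS],
                               float **SpeakerOut, int num_speakers,
                               int num_samples)
    {
        float hf[NUM_AMBI_CHANS][1024], lf[NUM_AMBI_CHANS][1024];

        for(int c = 0;c < NUM_AMBI_CHANS;c++)
            BandSplitter_process(&splitters[c], hf[c], lf[c], AmbiIn[c],
                                 num_samples);

        for(int s = 0;s < num_speakers;s++)
        {
            for(int i = 0;i < num_samples;i++)
            {
                float out = 0.0f;
                for(int c = 0;c < NUM_AMBI_CHANS;c++)
                    out += MatrixHF[s][c]*hf[c][i] + MatrixLF[s][c]*lf[c][i];
                SpeakerOut[s][i] += out;
            }
        }
    }
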
There were phase issues caused by applying HRTF directly to the B-Format
channels, since the HRIR delays were all averaged, which removed the
inter-aural time delay and, with it, significant spatial information.
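
For a concrete (hypothetical) example: if the left-ear HRIR for some direction
has a delay of 12 samples and the right-ear HRIR a delay of 26 samples, the
14-sample difference is the inter-aural time delay used to localize the sound;
averaging both delays to 19 samples leaves a difference of zero, so that cue
disappears.
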
This means we track the current params and the target params, rather than the
target params and the stepping. This more closely matches the non-HRTF mixers.
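
A sketch of the difference, assuming the stepping works roughly like the
non-HRTF gain stepping (the names and the HRIR length are placeholders):

    #define HRIR_LENGTH 32

    typedef struct HrtfParams {
        float Coeffs[HRIR_LENGTH][2];   /* left/right HRIR coefficients */
    } HrtfParams;

    typedef struct HrtfState {
        HrtfParams Current;   /* what the last mix ended on */
        HrtfParams Target;    /* what the latest source update asked for */
    } HrtfState;

    /* Move Current 1/counter of the remaining way toward Target.  If the
     * caller decrements counter after each mixing pass, this traces a linear
     * ramp that lands exactly on Target when counter reaches zero. */
    static void HrtfState_Step(HrtfState *state, int counter)
    {
        if(counter <= 0)
        {
            state->Current = state->Target;
            return;
        }
        float delta = 1.0f/(float)counter;
        for(int i = 0;i < HRIR_LENGTH;i++)
        {
            state->Current.Coeffs[i][0] += delta*(state->Target.Coeffs[i][0] -
                                                  state->Current.Coeffs[i][0]);
            state->Current.Coeffs[i][1] += delta*(state->Target.Coeffs[i][1] -
                                                  state->Current.Coeffs[i][1]);
        }
    }
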
This mixes to a 4-channel first-order ambisonics buffer. With ACN ordering and
N3D scaling, this makes it easy to remain compatible with effects that only
care about mono input since channel 0 is an unattenuated mono signal.
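
For reference, first-order panning gains under ACN ordering with N3D scaling
look like this (a sketch; the function name is made up), which is why ACN
channel 0 carries an unattenuated copy of the mono signal:

    #include <math.h>

    /* Panning gains for a unit direction vector (x, y, z), using ACN channel
     * order and N3D (full 3D) normalization. */
    static void CalcFirstOrderGains(float x, float y, float z, float ingain,
                                    float gains[4])
    {
        const float n3d1 = sqrtf(3.0f);     /* N3D scale for first order */
        gains[0] = ingain;                  /* ACN 0 (W): no direction term */
        gains[1] = ingain * n3d1*y;         /* ACN 1 (Y) */
        gains[2] = ingain * n3d1*z;         /* ACN 2 (Z) */
        gains[3] = ingain * n3d1*x;         /* ACN 3 (X) */
    }
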