| Commit message | Author | Age | Files | Lines |
|
The combined source and listener gains now can't exceed a multiplier of 16 (~24dB,
i.e. 20*log10(16)). This is to keep mixes from getting out of control with large
volume boosts, which would reduce the effective precision afforded by
floating-point samples.
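A minimal sketch of the clamp, with hypothetical helper and constant names rather
than the actual code:

    /* Cap the combined per-source gain at 16x (~+24dB). */
    #define GAIN_MIX_MAX  16.0f

    static ALfloat CalcCombinedGain(ALfloat srcGain, ALfloat listenerGain)
    {
        ALfloat gain = srcGain * listenerGain;
        return (gain > GAIN_MIX_MAX) ? GAIN_MIX_MAX : gain;
    }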
|
This appears to be how Creative's Windows drivers handle it, and is necessary
for at least the Windows version of UT2k4 (otherwise it tries to play a source
while suspended, checks and sees it's stopped, then kills it before it's given
a chance to start playing).
Consequently, the internal properties it gets mixed with are determined by what
the source properties are at the time of the play call, and the listener
properties at the time of the suspend call.
This does not change alDeferUpdatesSOFT, which will still hold the play state
change until alProcessUpdatesSOFT.
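As an illustration of the application-side sequence this accommodates, using only
standard OpenAL/ALC calls (simplified, no error checking):

    alcSuspendContext(context);

    alSourcePlay(source);              /* state becomes AL_PLAYING immediately */

    ALint state;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if(state != AL_PLAYING)            /* would falsely look stopped if the play  */
        alSourceStop(source);          /* state change were held until processing */

    alcProcessContext(context);        /* playback proceeds with the stored props */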
|
Note that this now also causes all playing sources to update when an effect
slot is updated. This is a bit wasteful, as it should only need to re-update
sources that are using the effect slot (and only when a relevant property
changes), but it's good enough, especially with deferring, since all playing
sources are going to get updated on the process call anyway.
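Conceptually, the slot update ends up doing something like the following rough
sketch (names are illustrative, not the actual functions or fields):

    /* After committing new effect slot properties, flag every playing
     * source so its internal properties get recalculated. */
    static void UpdateAllSourceProps(ALCcontext *context)
    {
        ALsource *source;
        for(source = context->PlayingSources;source != NULL;source = source->next)
            source->NeedsUpdate = AL_TRUE;
    }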
|
This means we no longer have to be careful about avoiding duplicate state
pointers, since the reference count ensures they're deleted when appropriate.
The only caveat is that the mixer is not allowed to decrement references, since
that can cause the object to be freed (which the mixer code is not allowed to
do).
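A minimal sketch of the scheme, with illustrative names (the real code wraps
this differently):

    typedef struct ALeffectState {
        RefCount Ref;   /* set to 1 by Construct */
        /* ... */
    } ALeffectState;

    static void ALeffectState_IncRef(ALeffectState *state)
    { IncrementRef(&state->Ref); }

    /* Must not be called from the mixer thread: dropping the last
     * reference frees the object, which the mixer may never do. */
    static void ALeffectState_DecRef(ALeffectState *state)
    {
        if(DecrementRef(&state->Ref) == 0)
            ALeffectState_Delete(state);
    }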
|
This is mostly just reorganizing the effects to call the Construct method,
which initializes the ref count.
|
The source's voice holds a copy of the last properties it received, so listener
updates can make sources recalculate internal properties from that stored copy.
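Roughly, the voice carries something like this (a sketch; field names are
illustrative):

    typedef struct ALvoice {
        struct ALvoiceProps *Update;  /* pending property set from the source */
        struct ALvoiceProps Props;    /* last property set actually applied   */
        /* ... derived mixing state (gains, filters, etc.) ... */
    } ALvoice;

A listener change can then recompute the derived parameters from Props without
the source having to resend anything.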
|
This allows each HRIR to contribute a frequency-dependent response, essentially
acting like a dual-band decoder playing over the cube speaker array.
|
The CalcEvIndices and CalcAzIndices methods were dependent on the FPU being in
round-to-zero mode, which is not the case for panning initialization. And since
we just need the closest index and don't need to lerp between them, it's better
to just directly calculate the index with rounding.
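A sketch of the direct calculation (illustrative, not the exact code):

    /* Map an elevation in [-pi/2, pi/2] to the closest HRTF elevation
     * index, rounding explicitly instead of relying on the FPU mode. */
    static ALsizei CalcEvIndex(ALsizei evcount, ALfloat ev)
    {
        ALsizei idx;
        ev = (F_PI_2 + ev) * (evcount-1) / F_PI;
        idx = (ALsizei)floorf(ev + 0.5f);
        return (idx < evcount-1) ? idx : evcount-1;
    }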
|
Using all the HRIRs seems to have problems with volume balancing, due in part
to HRTF data sets not having uniform enough measurements for a simple decoder
matrix to work (and generating a proper one that would work better is not that
easy). This still maintains the benefits of decoding ambisonics directly to
HRTF, namely that it only needs to filter the 4 ambisonic channels and can use
more optimized HRTF filtering methods on those channels. It can also be
improved further with frequency-dependent processing baked into the generated
coefficients, incurring no extra run-time cost for it.
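In effect, the per-sample work reduces to filtering the four B-Format channels
through fixed left/right ear responses and summing, conceptually like this
(buffer history and SIMD details glossed over; names are illustrative):

    /* Four ambisonic channels, each with a precomputed left/right FIR
     * derived from the HRTF set. */
    for(c = 0;c < 4;c++)
    {
        for(i = 0;i < SamplesToDo;i++)
        {
            ALfloat left = 0.0f, right = 0.0f;
            for(j = 0;j < IrSize;j++)
            {
                left  += AmbiIn[c][i-j] * Coeffs[c][j][0];
                right += AmbiIn[c][i-j] * Coeffs[c][j][1];
            }
            LeftOut[i] += left;
            RightOut[i] += right;
        }
    }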
|
Last time this attempted to average the HRIRs according to their contribution
to a given B-Format channel as if they were loudspeakers, as well as averaging
the HRIR delays. The latter part resulted in the loss of the ITD (inter-aural
time delay), a key component of HRTF.
This time, the HRIRs are averaged in the same way, except that instead of
averaging the delays, the delays are applied to the resulting coefficients (for
example, a delay of 8 would write the HRIR starting at the 8th sample of the
target HRIR). This roughly doubles the IR length, as the largest delay is about
35 samples while the filter is normally 32 samples. However, that's still
smaller than the original data set's IRs (which were 256 samples), and it only
needs to be applied to 4 channels for first-order ambisonics rather than the
8-channel cube. So it's doing twice as much work per sample, but on half as
many channels.
Additionally, since the resulting HRIRs no longer rely on an extra delay line,
a more efficient HRTF mixing function can be made that doesn't use one. Such a
function can also avoid the per-sample stepping parameters the original uses.
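A rough sketch of how the delays get folded into the accumulated coefficients
(illustrative only; names are made up):

    /* For each measured HRIR contributing to an ambisonic channel, apply
     * its per-ear delay as an offset into the longer target response. */
    for(i = 0;i < NumHrirs;i++)
    {
        ALuint ldelay = Delays[i][0], rdelay = Delays[i][1];
        for(j = 0;j < IrPoints;j++)   /* 32 taps in the stock data set */
        {
            Target[ldelay+j][0] += Hrirs[i][j][0] * AmbiWeight[i];
            Target[rdelay+j][1] += Hrirs[i][j][1] * AmbiWeight[i];
        }
    }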
|
JACK2 will print error messages to stderr if it fails to connect to a server.
Users who don't normally use JACK but have the client lib installed will get
those messages even though OpenAL Soft will continue on to find a working
backend without trouble. So to avoid it, set an error message handler that'll
log them as warnings.
This isn't that great because there's no way to tell whether the error messages
are due to the server not running, or some other problem. And it resets the
callback to the default afterward even if it may have been set to something
else before. JACK2, which is what needs this workaround in the first place,
doesn't export the jack_error_callback pointer to properly save and restore it.
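The workaround amounts to installing a handler around the connection attempt,
roughly like this (simplified; WARN is the library's usual logging macro):

    static void jack_msg_handler(const char *message)
    {
        WARN("%s\n", message);
    }

    jack_set_error_function(jack_msg_handler);
    client = jack_client_open("alsoft", options, &status);
    jack_set_error_function(NULL);   /* back to the (unsaveable) default */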
|
Not that this really changes anything since the CoreAudio backend doesn't honor
the ALCdevice's buffer metrics, nor accurately report the device's actual
metrics. But it clears up warnings from a non-multiple-of-four update size if
the sample rate causes it to change.
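The adjustment is essentially scaling the update size to the device's rate and
then keeping it a multiple of four, something like this sketch (hwRate stands
in for the rate CoreAudio reports):

    ALuint updateSize = (ALuint)((ALuint64)device->UpdateSize*hwRate /
                                 device->Frequency);
    /* Round to a multiple of four so the SIMD mixers don't warn. */
    device->UpdateSize = (updateSize+2) & ~3u;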
|
Certain operations on the buffer queue depend on the loop state to behave
properly, so it should not be deferred until the async voice update occurs.
|
Currently incomplete, as second- and third-order output will not correctly
handle B-Format input buffers. A standalone up-sampler will be needed, similar
to the high-quality decoder.
Also, output is ACN ordering with SN3D normalization. A config option will
eventually be provided to change this if desired.
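For reference, first-order ACN ordering is W,Y,Z,X, and SN3D differs from N3D by
a per-order scale of sqrt(2n+1). A conversion sketch (an assumed helper, not
something in the source):

    /* ACN channel order for first order: 0=W, 1=Y, 2=Z, 3=X.
     * N3D = SN3D * sqrt(2n+1), so the n=1 components of an N3D signal
     * are divided by sqrt(3) to get SN3D (W is unchanged). */
    static void ConvertN3DToSN3D(ALfloat *wyzx)
    {
        wyzx[1] /= sqrtf(3.0f);
        wyzx[2] /= sqrtf(3.0f);
        wyzx[3] /= sqrtf(3.0f);
    }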
|
Unfortunately on certain systems, the TLS callback is called in a restricted
context, and isn't allowed to access certain messaging sub-systems. Such sub-
systems may be used if the thread's context is freed, in turn freeing the
device, which it tries to close.
Ideally, the app shouldn't have tried to destroy a context while it was still
current on a thread, or even leave a context current on a thread that's being
destroyed. So for now, release the context ref and print an ERR that it might
be leaked.
|
It's been disabled forever, and I have no idea how to make it work properly.
Better to just redo it when making something that works.
|
It's a horribly inefficient way to process multiple samples through the
filter.
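The preferable form runs the filter over a whole block in one call, e.g. this
sketch of a transposed direct-form-II biquad (field names are illustrative):

    static void ALfilterState_process(ALfilterState *filter, ALfloat *restrict dst,
                                      const ALfloat *restrict src, ALsizei numsamples)
    {
        ALfloat z1 = filter->z1, z2 = filter->z2;
        ALsizei i;
        for(i = 0;i < numsamples;i++)
        {
            ALfloat in = src[i];
            ALfloat out = in*filter->b0 + z1;
            z1 = in*filter->b1 - out*filter->a1 + z2;
            z2 = in*filter->b2 - out*filter->a2;
            dst[i] = out;
        }
        /* Keep the history in locals during the loop; store it back once. */
        filter->z1 = z1;
        filter->z2 = z2;
    }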
|
Previously, if an HRTF file was already loaded, it would not only skip loading
it again, but also skip adding it to the output enumeration list. Now it
properly skips loading when the file is already loaded, but still adds it to
the enumeration list if it's not already there.
|
The decoders use a row of the HF decoder matrix followed by a row of the LF
decoder matrix, for each given output channel in turn. Packing the two matrices
accordingly results in less memory hopping.
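With the two matrices interleaved per output channel, each channel's HF and LF
rows sit next to each other in memory, roughly like this sketch (names are
illustrative):

    typedef struct BandedRow {
        ALfloat HF[MAX_AMBI_COEFFS];
        ALfloat LF[MAX_AMBI_COEFFS];
    } BandedRow;

    /* One entry per output channel; the decode loop reads sequentially. */
    BandedRow Matrix[MAX_OUTPUT_CHANNELS];

    for(c = 0;c < NumChannels;c++)
    {
        ALfloat sum = 0.0f;
        for(i = 0;i < NumCoeffs;i++)
            sum += Matrix[c].HF[i]*HFSamples[i] + Matrix[c].LF[i]*LFSamples[i];
        Out[c] = sum;
    }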