Basically, this now just relies on being able to specify composited buffers.
This will be to allow buffer layering: multiple buffers of the same format and
sample rate that are mixed together prior to resampling, filtering, and
panning. This allows composing sounds from individual components that can be
swapped around on different invocations (e.g. layer SoundA and SoundB on one
instance, SoundA and SoundC on a different instance for a slightly different
sound, just SoundA on a third instance, and so on). The longest buffer within
the list item determines the length of the list item.
More work needs to be done to fully support it, namely the ability to specify
multiple buffers to layer for static and streaming sources. The behavior of
loop points for layered static sources also needs to be worked out, and it may
be worth allowing each layer to have a sample offset.
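As a rough sketch of the layering idea (hypothetical names, not the actual
buffer-list code), mixing same-format, same-rate layers before the result is
resampled, filtered, and panned could look like this, with the longest layer
determining the mixed length:

```c
#include <stddef.h>

/* Illustrative only: each layer is a mono run of samples in the same format
 * and sample rate. Shorter layers simply stop contributing once exhausted. */
typedef struct BufferLayer {
    const float *samples;
    size_t length;
} BufferLayer;

static size_t MixLayers(const BufferLayer *layers, size_t num_layers,
                        float *out, size_t out_capacity)
{
    /* The longest layer determines the length of the mixed result. */
    size_t longest = 0;
    for(size_t l = 0; l < num_layers; ++l)
    {
        if(layers[l].length > longest)
            longest = layers[l].length;
    }
    if(longest > out_capacity)
        longest = out_capacity;

    for(size_t i = 0; i < longest; ++i)
    {
        float sum = 0.0f;
        for(size_t l = 0; l < num_layers; ++l)
        {
            if(i < layers[l].length)
                sum += layers[l].samples[i];
        }
        out[i] = sum;
    }
    return longest;
}
```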
This improves the transition width, allowing more of the higher frequencies to
remain audible. It would be preferable to have an upper limit of 32 points
instead of 48, to reduce the overall table size and the CPU cost of
downsampling.
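For context on the points-versus-cost trade-off, a band-limited sinc resampler
needs proportionally more filter points as the downsampling ratio grows in
order to keep a similar transition width. A minimal sketch of that
relationship, with purely illustrative constants (not the actual bsinc table
parameters):

```c
/* Illustrative only: when downsampling, the anti-aliasing cutoff drops, so the
 * filter needs more points to keep a comparable transition width. Capping the
 * point count bounds the table size and per-sample cost, at the expense of a
 * wider transition band (more high-frequency rolloff). */
static int SincPointsForStep(double step /* >1.0 means downsampling */)
{
    const int base_points = 12; /* assumed filter length at unity rate */
    const int max_points  = 48; /* the upper limit discussed above */
    if(step <= 1.0)
        return base_points;
    int points = (int)(base_points*step + 0.5);
    return (points < max_points) ? points : max_points;
}
```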
Also rename the resampler functions to remove the unnecessary '32' token.
This is a bit more efficient than calling the normal HRTF mixing function
twice, and helps solve the problem of the convolution results not being
consistent with the new HRIR.
This greatly improves HRTF performance since the dual-mix only applies to the
64-sample coefficient transition. So rather than doubling the full mix, it only
doubles 64 samples out of the full mix.
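A minimal sketch of that idea (illustrative, not the actual mixer code): only
the short transition segment is mixed with both the old and new responses,
while the rest of the buffer is mixed once with the new one. Here the old_* and
new_* arrays are assumed to already hold the per-sample results of convolving
the input with the respective HRIRs:

```c
#include <stddef.h>

#define FADE_SAMPLES 64 /* length of the coefficient transition */

static void MixWithTransition(float *left, float *right, size_t total,
                              const float *old_l, const float *old_r,
                              const float *new_l, const float *new_r)
{
    size_t fade = (total < FADE_SAMPLES) ? total : FADE_SAMPLES;

    /* Dual mix: the old response fades out while the new one fades in. */
    for(size_t i = 0; i < fade; ++i)
    {
        float t = (float)(i+1) / (float)FADE_SAMPLES;
        left[i]  += old_l[i]*(1.0f - t) + new_l[i]*t;
        right[i] += old_r[i]*(1.0f - t) + new_r[i]*t;
    }

    /* Past the transition, only the new response contributes. */
    for(size_t i = fade; i < total; ++i)
    {
        left[i]  += new_l[i];
        right[i] += new_r[i];
    }
}
```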
Unfortunately, mixing the fade-in can't be skipped when fading to silence,
because the history still needs to be kept up to date.
This removes the need to access a couple more source fields in the mixer, and
also makes the looping and queue fields non-atomic.
This is intended to do conversions for interleaved samples, and supports
changing from one DevFmtType to another as well as resampling. It does not
handle remixing channels.
The mixer is better optimized by using the resampling functions directly.
However, this should prove useful for recording with certain backends that
won't do the conversion themselves.
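As an illustration of what such a converter does (a hedged sketch with
hypothetical names, not the actual converter API), here is the simplest case:
interleaved 16-bit samples converted to float with a linear-interpolating rate
change and no channel remixing:

```c
#include <stddef.h>
#include <stdint.h>

/* Convert interleaved int16 input to interleaved float output while changing
 * the sample rate via linear interpolation; the channel count is preserved.
 * Purely illustrative; a real converter would carry fractional position and
 * history across calls and use a better resampler. */
static size_t ConvertShortToFloat(const int16_t *in, size_t in_frames,
                                  float *out, size_t out_frames,
                                  unsigned int channels,
                                  unsigned int src_rate, unsigned int dst_rate)
{
    if(in_frames < 2 || out_frames == 0 || channels == 0)
        return 0;

    double step = (double)src_rate / (double)dst_rate;
    size_t produced = 0;
    for(double pos = 0.0; produced < out_frames; pos += step, ++produced)
    {
        size_t idx = (size_t)pos;
        if(idx >= in_frames-1)
            break;
        float frac = (float)(pos - (double)idx);
        for(unsigned int c = 0; c < channels; ++c)
        {
            float s0 = in[idx*channels + c] / 32768.0f;
            float s1 = in[(idx+1)*channels + c] / 32768.0f;
            out[produced*channels + c] = s0 + (s1-s0)*frac;
        }
    }
    return produced; /* frames actually written */
}
```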
This should cut down on unnecessary quantization noise (however minor) for 8-
and 16-bit samples. Unfortunately a power-of-2 multiple can't be used as easily
for converting float samples to integer, due to integer types having a non-
power-of-2 maximum amplitude (it'd require more per-sample clamping).
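To make the asymmetry concrete (a sketch, not the actual conversion code):
integer-to-float can use an exact power-of-2 scale, while float-to-integer has
to clamp because the positive limit isn't a power of 2:

```c
#include <stdint.h>

/* int16 -> float with a power-of-2 scale: 1/32768 is exactly representable in
 * binary floating point, so the scaling itself adds no rounding error. */
static float SampleToFloat(int16_t s)
{
    return (float)s * (1.0f/32768.0f);
}

/* float -> int16: the maximum positive value is 32767 (not a power of 2), so
 * scaling by 32768 can overshoot at +1.0 and per-sample clamping is needed. */
static int16_t FloatToSample(float f)
{
    float scaled = f * 32768.0f;
    if(scaled >  32767.0f) return  32767;
    if(scaled < -32768.0f) return -32768;
    return (int16_t)scaled;
}
```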
This improves fading between HRIRs as sources pan around. In particular, it
improves the issue with individual coefficients having various rounding errors
in the stepping values, as well as issues with interpolating delay values.
It does this by doing two mixing passes for each source. First using the last
coefficients that fade to silence, and then again using the new coefficients
that fade from silence. When added together, it creates a linear fade from one
to the other. Additionally, the gain is applied separately so the individual
coefficients don't step with rounding errors. Although this does increase CPU
cost since it's doing two mixes per source, each mix is a bit cheaper now since
the stepping is simplified to a single gain value, and the overall quality is
improved.
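A minimal sketch of the two-pass idea (illustrative names and a naive
direct-form convolution, not the actual OpenAL Soft mixer): each pass convolves
the input with one HRIR and applies a single gain together with a linear fade,
and the two passes sum to a cross-fade from the old response to the new one:

```c
#include <stddef.h>

#define HRIR_LENGTH 32 /* assumed impulse-response length */

static void MixOnePass(float *out, const float *in, size_t count,
                       const float hrir[HRIR_LENGTH], float gain,
                       float fade_start, float fade_step)
{
    for(size_t i = 0; i < count; ++i)
    {
        /* Direct-form convolution, assuming zero history before the buffer. */
        float acc = 0.0f;
        for(size_t j = 0; j <= i && j < HRIR_LENGTH; ++j)
            acc += hrir[j] * in[i-j];

        /* Gain is applied as one factor; only the fade steps per sample. */
        float fade = fade_start + fade_step*(float)i;
        out[i] += acc * gain * fade;
    }
}

/* Pass 1: old HRIR fading 1 -> 0. Pass 2: new HRIR fading 0 -> 1. Added
 * together they form a linear cross-fade between the two responses. */
static void MixHrtfCrossfade(float *out, const float *in, size_t count,
                             const float old_hrir[HRIR_LENGTH], float old_gain,
                             const float new_hrir[HRIR_LENGTH], float new_gain)
{
    if(count == 0) return;
    float step = 1.0f / (float)count;
    MixOnePass(out, in, count, old_hrir, old_gain, 1.0f, -step);
    MixOnePass(out, in, count, new_hrir, new_gain, 0.0f, +step);
}
```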
NFC filters currently only work when rendering to ambisonic buffers, which
includes HQ rendering and ambisonic output. There are two new config options:
'decoder/nfc' (default on) enables or disables use of NFC filters globally, and
'decoder/nfc-ref-delay' (default 0) specifies the reference delay parameter for
NFC-HOA rendering with ambisonic output (a value of 0 disables NFC).
Currently, NFC filters rely on having an appropriate value set for
AL_METERS_PER_UNIT to get the correct scaling. HQ rendering uses the averaged
speaker distances as a control/reference, and currently doesn't correct for
individual speaker distances (if the speakers are all equidistant, this is
fine, otherwise per-speaker correction should be done as well).
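For reference, assuming the usual alsoft.conf section/key layout, the two
options described above would be set something like this (the values shown are
only examples):

```ini
[decoder]
# Enable/disable use of NFC filters globally (default on).
nfc = true
# Reference delay for NFC-HOA rendering with ambisonic output; 0 disables NFC.
nfc-ref-delay = 0.5
```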
This has a couple behavioral changes. First and biggest is that querying
AL_BUFFERS_PROCESSED from a source will always return all buffers processed
when in an AL_STOPPED state. Previously all buffers would be set as processed
when first becoming stopped, but newly queued buffers would *not* be indicated
as processed. That old behavior was not compliant with the spec, which
unequivocally states "On a source in the AL_STOPPED state, all buffers are
processed."
Secondly, querying AL_BUFFER on an AL_STREAMING source will now always return
0. Previously it would return the current "active" buffer in the queue, but
there's no basis for that in the spec.
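As a usage-level illustration of the stricter behavior (standard OpenAL calls
only), a stopped streaming source now reports its whole queue as processed, so
a drain loop like this releases everything:

```c
#include <AL/al.h>

static void DrainProcessedBuffers(ALuint source)
{
    ALint state = 0, processed = 0;

    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if(state == AL_STOPPED)
    {
        /* All queued buffers are reported as processed once stopped. */
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
        while(processed-- > 0)
        {
            ALuint bufid = 0;
            alSourceUnqueueBuffers(source, 1, &bufid);
            /* bufid can now be refilled and requeued, or deleted. */
        }
    }

    /* Querying AL_BUFFER on a streaming source now always returns 0. */
    ALint current = -1;
    alGetSourcei(source, AL_BUFFER, &current);
    (void)current;
}
```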
Perf shows less than a 1 percent CPU difference from the higher quality bsinc
resampler, but it uses almost twice as much memory (a 128KB lookup table).
This places the Send[] array at the end of the struct, making it easier to
handle dynamically.
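The pattern this enables is the familiar C flexible-array-member allocation; a
hypothetical sketch (not the actual struct layout):

```c
#include <stdlib.h>

typedef struct SendParams {
    float gain;
    float gain_hf;
} SendParams;

typedef struct Voice {
    int source_id;
    unsigned int num_sends;
    SendParams Send[]; /* flexible array member; must be the last field */
} Voice;

/* Allocate the struct and its Send[] array in one block, sized for however
 * many sends are actually needed. */
static Voice *NewVoice(unsigned int num_sends)
{
    Voice *v = calloc(1, sizeof(*v) + sizeof(SendParams)*num_sends);
    if(v) v->num_sends = num_sends;
    return v;
}
```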