Some data sets are just too sparse, with too few measurements to properly
handle slowly panning sources. Although not perfect, bilinearly interpolating
the HRIR measurements improves the positional accuracy.
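
A rough sketch of the idea, assuming a regular elevation/azimuth measurement
grid (the names here are illustrative, not the actual functions):

    /* Blend the four HRIRs surrounding the target direction, weighted by
     * the fractional distance to each grid point (bilinear interpolation).
     */
    static void LerpHrir(float *out, size_t irSize,
                         const float *ir00, const float *ir01, /* ev0: az0,az1 */
                         const float *ir10, const float *ir11, /* ev1: az0,az1 */
                         float azMu, float evMu) /* fractional offsets, 0..1 */
    {
        const float w00 = (1.0f-evMu)*(1.0f-azMu);
        const float w01 = (1.0f-evMu)*azMu;
        const float w10 = evMu*(1.0f-azMu);
        const float w11 = evMu*azMu;
        for(size_t i = 0; i < irSize; i++)
            out[i] = ir00[i]*w00 + ir01[i]*w01 + ir10[i]*w10 + ir11[i]*w11;
    }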
|
This will make it easier to handle HRTF data sets that have separate left and
right ear responses. An mhr version update will be needed to take advantage of
that.
|
Currently this only applies to external files, rather than embedded data sets.
Also, once loaded, HRTFs aren't unloaded until library shutdown.
|
This improves fading between HRIRs as sources pan around. In particular, it
mitigates the issue of individual coefficients picking up various rounding
errors in the stepping values, as well as issues with interpolating delay
values.
It does this by doing two mixing passes for each source: first using the last
coefficients, fading to silence, and then again using the new coefficients,
fading up from silence. Added together, they create a linear fade from one set
to the other. Additionally, the gain is applied separately so the individual
coefficients don't step with rounding errors. Although this does increase CPU
cost, since it's doing two mixes per source, each mix is a bit cheaper now that
the stepping is simplified to a single gain value, and the overall quality is
improved.
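
Roughly, the two passes look like this (an illustrative per-ear sketch with
hypothetical names; the real mixer also handles the stereo coefficient pairs
and delays):

    static void MixOnePass(float *out, const float *input, int todo,
                           int irSize, const float *coeffs,
                           float gain, float gainStep)
    {
        /* input has at least irSize-1 samples of history before input[0]. */
        for(int i = 0; i < todo; i++)
        {
            float sum = 0.0f;
            for(int j = 0; j < irSize; j++)
                sum += coeffs[j]*input[i-j];
            out[i] += sum*gain;
            gain += gainStep;
        }
    }

    /* Two passes: the old IR fades 1 -> 0 while the new IR fades 0 -> 1.
     * Summed, they form a linear crossfade, and each pass steps only a
     * single gain value instead of every coefficient. */
    static void MixHrtfFade(float *out, const float *input, int todo,
                            int irSize, const float *oldCoeffs,
                            const float *newCoeffs)
    {
        const float step = 1.0f/(float)todo;
        MixOnePass(out, input, todo, irSize, oldCoeffs, 1.0f, -step);
        MixOnePass(out, input, todo, irSize, newCoeffs, 0.0f,  step);
    }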
|
This keeps the decoder matrices and coefficient mapping together, in case they
change in the future.
|
Unsigned 32-bit offsets have some potential overhead on 64-bit targets for
pointer/array accesses: since unsigned arithmetic is defined to wrap, the
compiler can't always widen the index to the pointer size once, and may have
to re-zero-extend it around each access. No idea how much impact it has in
practice, but it's nice to be correct about it.
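
For illustration only (not code from the tree); in a loop this simple the
compiler can often prove the wrap can't happen, but the pattern shows the
concern:

    #include <stddef.h>
    #include <stdint.h>

    /* With a 32-bit unsigned index, wraparound at 2^32 is well-defined, so
     * the compiler may have to re-zero-extend the index on each access
     * instead of promoting it to a 64-bit pointer offset once. */
    float sum_u32(const float *data, uint32_t count)
    {
        float sum = 0.0f;
        for(uint32_t i = 0; i < count; i++)
            sum += data[i];
        return sum;
    }

    /* A pointer-sized index sidesteps the issue. */
    float sum_size(const float *data, size_t count)
    {
        float sum = 0.0f;
        for(size_t i = 0; i < count; i++)
            sum += data[i];
        return sum;
    }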
|
This should improve positional quality for relatively low cost. Full HRTF
rendering still only uses first-order since the only use of the dry buffer
there is for first-order content (B-Format buffers, effects).
|
These should better represent the pseudo-inverse matrices with N3D scaling.
|
It's hard to tell which is ultimately better, although this way does make the
FOA output somewhat louder, which will help when it's combined with direct
HRTF rendering.
|
Use an empty source file to build a stub object file, instead of /dev/null. Use
_mh_dylib_header to retrieve the data on 10.7+, instead of _mh_execute_header.
And shorten the names to fit within the 16-character section-name limit.
Thanks to Anna Cheremnykh for the fixes!
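
A hedged sketch of the retrieval (64-bit shown; the segment/section names are
placeholders, not the ones actually used):

    #include <mach-o/getsect.h>
    #include <mach-o/ldsyms.h>

    /* On 10.7+, the dylib references its own Mach-O header via
     * _mh_dylib_header rather than the executable's _mh_execute_header. */
    static const void *get_embedded_data(unsigned long *size)
    {
        return getsectiondata(&_mh_dylib_header, "binary", "default_hrtf",
                              size);
    }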
|
The original pseudo-inverse method that generated the LF matrix expects the
high frequencies to be scaled up over the low frequencies by sqrt(7),
~2.645751 (about 8.45dB). However, the AllRAD method used to generate the HF
matrix produced a matrix that was only scaled up by 1.46551981258 (based on
the average of the W coefficients).
Previously, the LF matrix was scaled down by the full sqrt(7), the difference
specified by the pseudo-inverse results. That failed to account for the
increase already present in the HF matrix, so now the LF matrix is scaled down
by only the remaining difference between the expected scaling and the scaling
already present in the HF matrix (sqrt(7) / 1.46551981258 = 1.80533302205, or
roughly 5.13dB, where the reciprocal 0.553914423 gives -5.13dB).
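
A quick check of the quoted constants (a throwaway snippet, not code from the
tree):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double expected = sqrt(7.0);           /* ~2.645751, ~8.45dB */
        const double hf_scale = 1.46551981258;       /* avg of W coeffs    */
        const double remain   = expected / hf_scale; /* ~1.80533302205     */
        printf("remaining ratio: %.11f (%.2fdB)\n", remain,
               20.0*log10(remain));
        printf("LF gain: %.9f (%.2fdB)\n", 1.0/remain, -20.0*log10(remain));
        return 0;
    }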
|
It still fades between HRIRs when it changes, but now it selects the nearest
one instead of blending the nearest four. Due to the minimum-phase nature of
the HRIRs, interpolating between delays led to some oddities which were
exacerbated by the fading (and the fading is needed to avoid clicks and pops,
and to smooth out changes).
|
This uses an AllRAD-derived decoder matrix for the high frequencies, which
seems to improve positioning response. It also switches back to dual-band.
The low frequencies appear to be unexpectedly quiet by comparison, but it's not
that bad and can be tweaked later.
|
Since it's accumulating multiple HRIRs for two output speakers, it seems to be
a better option to preserve the amplitude of the high-frequency decoder instead
of increasing it, and reduce the amplitude of the low-frequency decoder to
compensate.
|
14 in total: an 8-point cube and a 6-point diamond, to help improve sound
localization a bit. This incurs no real extra CPU cost once the IRs are built.
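
The coordinates below are an assumption of what an 8-point cube plus 6-point
diamond (octahedron) layout looks like, as normalized directions; the actual
positions used may differ:

    #define SQRT1_3 0.57735026919f /* 1/sqrt(3), unit-length cube corners */

    static const float CubePoints[8][3] = {
        { SQRT1_3,  SQRT1_3,  SQRT1_3}, { SQRT1_3,  SQRT1_3, -SQRT1_3},
        { SQRT1_3, -SQRT1_3,  SQRT1_3}, { SQRT1_3, -SQRT1_3, -SQRT1_3},
        {-SQRT1_3,  SQRT1_3,  SQRT1_3}, {-SQRT1_3,  SQRT1_3, -SQRT1_3},
        {-SQRT1_3, -SQRT1_3,  SQRT1_3}, {-SQRT1_3, -SQRT1_3, -SQRT1_3},
    };
    static const float DiamondPoints[6][3] = {
        { 1.0f, 0.0f, 0.0f}, {-1.0f, 0.0f, 0.0f},
        { 0.0f, 1.0f, 0.0f}, { 0.0f,-1.0f, 0.0f},
        { 0.0f, 0.0f, 1.0f}, { 0.0f, 0.0f,-1.0f},
    };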
|
Use single-band processing for now, to see if dual-band is causing a drop in
quality at all.
|
This allows each HRIR to contribute a frequency-dependent response, essentially
acting like a dual-band decoder playing over the cube speaker array.
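
Conceptually, something like this per B-Format channel (a sketch with
illustrative names, not the actual implementation): lowHrir/highHrir are the
band-split halves of one virtual speaker's HRIR, and lfGain/hfGain come from
the LF and HF decoder matrices for that speaker and channel.

    /* Bake a dual-band response into one B-Format channel's coefficients
     * by weighting each band with its own decoder matrix gain. */
    static void AccumulateDualBand(float *coeffs, size_t irSize,
                                   const float *lowHrir,
                                   const float *highHrir,
                                   float lfGain, float hfGain)
    {
        for(size_t i = 0; i < irSize; i++)
            coeffs[i] += lowHrir[i]*lfGain + highHrir[i]*hfGain;
    }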
|
The CalcEvIndices and CalcAzIndices methods were dependent on the FPU being in
round-to-zero mode, which is not the case for panning initialization. And since
we just need the closest index and don't need to lerp between them, it's better
to just directly calculate the index with rounding.
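
A minimal sketch of a mode-independent nearest-index calculation (names are
illustrative):

    #include <math.h>

    /* floorf(x + 0.5f) rounds to the nearest index without relying on the
     * FPU being in round-to-zero mode. */
    static unsigned int CalcNearestIndex(float pos, float scale,
                                         unsigned int maxIndex)
    {
        float idx = floorf(pos*scale + 0.5f);
        if(idx < 0.0f) idx = 0.0f;
        else if(idx > (float)maxIndex) idx = (float)maxIndex;
        return (unsigned int)idx;
    }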
|
Using all the HRIRs seems to have problems with volume balancing, due in part
to HRTF data sets not having uniform enough measurements for a simple decoder
matrix to work (and generating a proper one that would work better is not that
easy). This still maintains the benefits of decoding ambisonics directly to
HRTF, namely that it only needs to filter the 4 ambisonic channels and can use
more optimized HRTF filtering methods on those channels. It can also be
improved further with frequency-dependent processing baked into the generated
coefficients, incurring no extra run-time cost for it.
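
The run-time side might look something like this sketch (illustrative names
and layout; each ambi[c] is assumed to have irSize-1 samples of history
before index 0):

    /* Only the four B-Format channels are filtered, each through its own
     * precomputed left/right response pair, accumulating into two ears. */
    static void RenderAmbiToHrtf(float *left, float *right,
                                 const float *const ambi[4], /* W,X,Y,Z */
                                 const float (*const coeffs[4])[2],
                                 int todo, int irSize)
    {
        for(int c = 0; c < 4; c++)
        {
            for(int i = 0; i < todo; i++)
            {
                float l = 0.0f, r = 0.0f;
                for(int j = 0; j < irSize; j++)
                {
                    l += coeffs[c][j][0]*ambi[c][i-j];
                    r += coeffs[c][j][1]*ambi[c][i-j];
                }
                left[i] += l;
                right[i] += r;
            }
        }
    }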
|
Last time, this attempted to average the HRIRs according to their contribution
to a given B-Format channel as if they were loudspeakers, as well as averaging
the HRIR delays. The latter part resulted in the loss of the ITD (inter-aural
time delay), a key component of HRTF.
This time, the HRIRs are averaged in a similar way, except instead of averaging
the delays, they're applied to the resulting coefficients (for example, a delay
of 8 would apply the HRIR starting at the 8th sample of the target HRIR). This
roughly doubles the IR length, as the largest delay is about 35 samples while
the filter is normally 32 samples. However, this is still smaller than the
original data set's IRs (which were 256 samples), and it only needs to be
applied to 4 channels for first-order ambisonics, rather than the 8-channel
cube. So it's doing twice as much work per sample, but on half the number of
channels.
Additionally, since the resulting HRIRs no longer rely on an extra delay line,
a more efficient HRTF mixing function can be made that doesn't use one. Such a
function can also avoid the per-sample stepping parameters the original uses.
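
A sketch of how a delay can be folded into the accumulated coefficients
(illustrative names):

    /* Instead of averaging delays, each HRIR is added into the target
     * response starting at its own delay offset; e.g. delay==8 writes
     * hrir[0] into coeffs[8], hrir[1] into coeffs[9], and so on,
     * preserving the ITD in the summed result. */
    static void AddHrirWithDelay(float *coeffs, size_t targetSize,
                                 const float *hrir, size_t hrirSize,
                                 size_t delay, float gain)
    {
        for(size_t i = 0; i < hrirSize && delay+i < targetSize; i++)
            coeffs[delay+i] += hrir[i]*gain;
    }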
|
Previously, if an HRTF file was already loaded, it would not only skip loading
it again, but would also skip adding it to the output enumeration list. Now it
properly skips loading it when it's already loaded, but still adds it to the
enumeration list if it's not already in it.