Commit messages
|
Unfortunately mmdevapi does not do channel remixing or resampling, even for
capture, so the device can only be opened in the mode it's configured for.
For now, fall back to dsound or winmm to get the conversion until we can do it
ourselves.
|
Spotted by Xavier Bouchoux.
|
It would be nicer to have a more backend-agnostic method of doing this, perhaps
even one that only applies when the router is being used.
|
This is to avoid issues with device names that can be case-sensitive, and with
strcasecmp not working properly for uppercase characters outside 7-bit ASCII.
|
Like mmdevapi, duplicate device names will have a '#2' or such appended, so the
device's reported name may be incorrect.
|
Duplicate device names will have a '#2' or such appended, so the device's
reported name may be incorrect.
|
Since some devices may have it appended.
|
Note that it still uses FuMa scalings internally. Coefficients loaded from
config files specify whether they're FuMa (in both ordering and scaling) or
N3D, and will get reordered or rescaled as needed.
|
This reverts commit 7ffb9b3056ab280d5d9408fd023f3cfb370ed103.
It was behaving as appropriate before (orienting left did pan it left for the
listener), I was apparently just misinterpreting the matrix.
|
The rotation erroneously specified the orientation of the source relative to
the sound field, whereas it should be the orientation of the sound field *and*
source relative to the listener. So now when the source is oriented left, the
front of the sound field is to the left of the listener.
|
It seems a simple scaling on the coefficients will allow first-order content to
work with second- and third-order coefficients, although obviously not with any
improved locality. That may be something to look into for the future, but this
is good enough for now.
|
ALC_FALSE now indicates explicitly no HRTF mixing, while ALC_DONT_CARE_SOFT
is autodetect.
|
For sources with a non-0 radius:
When distance <= radius, factor = distance/radius*0.5
When distance > radius, factor = 1 - asinf(radius/distance)/PI
Also, avoid using Position after calculating the localized direction and
distance.
|
The backend's capture funcs are already called while under a lock, so multiple
threads shouldn't be able to read from it at once.
|
This isn't a real solution, but it should get IAudioClient_IsFormatSupported to
stop failing.
|
This basically acts as if the app created a new context with the specified
attributes (causing the device to reset with new parameters), then immediately
delete it. Existing contexts remain undisturbed, except for a temporary pause
while the device output is reconfigured.
|
DISABLED - Generic disabled status
ENABLED - Generic enabled status
DENIED - Not allowed (user has configured HRTF to be off)
REQUIRED - Forced (user has forced HRTF to be used)
HEADPHONES_DETECTED - Enabled because headphones were detected
UNSUPPORTED_FORMAT - Device format is not compatible with available filters
|
This can report the status of HRTF, specifying if it's enabled or not and why
(currently only reports unsupported formats, but this may be extended).
|
And limit it to first-order again, since there will likely need to be extra
scalings applied.
|
This method is intended to help development by easily testing the quality of
the B-Format encode and B-Format-to-HRTF decode. When used with HRTF, all
sources are rendered using the virtual B-Format output, rather than just
B-Format sources.
Despite the CPU cost savings (only four channels need to be filtered with HRTF,
while sources all render normally), the spatial acuity offered by the B-Format
output is pretty poor since it's only first-order ambisonics, so "full" HRTF
rendering is definitely preferred.
It's /possible/ for some systems to be edge cases that prefer the CPU cost
savings provided by basic over the sharper localization provided by full, and
you do still get 3D positional cues, but this is unlikely to be an actual
use-case in practice.