# Relation of Audio Device Client and AudioContext (#5)
This is one area where I disagree with the current API. The API shouldn't know about AudioContext; doing so makes AudioContext a first-class citizen. I think I'd prefer that AudioDeviceClient not know anything about AudioContexts; instead, you create an ADC instance and work with it directly. For situations like this, I think the ADC callback is still called, and all the audio from the ADC and the AudioContext is merged together to produce the final output.

---
The reason I proposed this is that the spec change on the Web Audio API side would be really simple as well. This might be better than a getter pattern because everything settles down at construction time.

```webidl
partial dictionary AudioContextOptions {
  AudioDeviceClient deviceClient;
};
```

There is at least one constraint here, and probably more corner cases, so we'll have to keep looking.
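Under this proposal, attaching a context to a device client would look something like the following. This is only a sketch of the proposed (not yet implemented) API; `myDeviceClient` is an already-constructed AudioDeviceClient as in the other examples in this thread, and none of this is shipped browser API.

```js
// Hypothetical usage of the proposed AudioContextOptions extension.
const context = new AudioContext({
  sampleRate: 48000,            // existing AudioContextOptions member
  deviceClient: myDeviceClient, // proposed member from this comment
});
// Everything "settles down at construction time": the association between
// the context and the device client is fixed here, rather than being
// produced lazily by a getter such as myDeviceClient.getContext().
```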
---

Notes from the telecon on 4/4/2019: the group has two sets of thoughts on this issue.

---
To capture the original proposal, here's an example of pass-through between the ADC and the AC:

```js
/* AudioDeviceClientGlobalScope */
const deviceCallback = (input, output, contextCallback) => {
  // This callback will automagically handle the buffer size difference
  // between the ADC (user-defined) and the AC (128 frames).
  contextCallback(input, output);
};
```

Then the `input` above will be delivered to the context's source:

```js
const context = myDeviceClient.getContext();
context.source.connect(context.destination);
```
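The "automagic" buffer-size handling above can be pictured as the user agent invoking the context's 128-frame render repeatedly inside one device callback. Here is a minimal, self-contained sketch of that idea in plain JavaScript; `makeContextRender`, the 512-frame device buffer, and the pass-through graph are all illustrative assumptions, not spec text.

```javascript
// Sketch: bridging a device callback of N frames to an AudioContext
// render quantum of 128 frames. In the real proposal this bridging is
// done internally by contextCallback; here it is modeled explicitly.
const RENDER_QUANTUM = 128;

function makeContextRender(renderQuantumFn) {
  return (input, output) => {
    // This simple sketch assumes the device buffer length is a multiple
    // of the render quantum; non-multiple sizes would need a ring buffer.
    for (let offset = 0; offset < output.length; offset += RENDER_QUANTUM) {
      renderQuantumFn(
        input.subarray(offset, offset + RENDER_QUANTUM),
        output.subarray(offset, offset + RENDER_QUANTUM)
      );
    }
  };
}

// A stand-in "context render": a pass-through graph (source -> destination).
const contextCallback = makeContextRender((inQuantum, outQuantum) => {
  outQuantum.set(inQuantum); // copy one 128-frame quantum
});

// Device callback with a user-defined buffer size of 512 frames:
// the context render above runs four times per device callback.
const input = Float32Array.from({ length: 512 }, (_, i) => i);
const output = new Float32Array(512);
contextCallback(input, output);
```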
---

Some thoughts: one of the most important functionalities we want is the ability to invoke the context's render callback from within the device callback. Allowing multiple contexts to be mapped to a single device client would look like this:

```js
setDeviceCallback((input, output, contextCallbacks) => {
  contextCallbacks[0](input, output);
  contextCallbacks[1](input, output);
  // and so on...
});
```

Theoretically this is possible, but I'm not sure what can be accomplished at the expense of the added complexity, and it raises some difficult questions. All in all, this is why I prefer a 1:1 relationship between AudioContext and AudioDeviceClient.
---
A quick summary of the F2F meeting: I understand the use case now, and having the context as a part of the ADC makes total sense. There was some question about how important that use case is, but unless there's a good reason not to allow this, I'm fine with having the AudioContext in the ADC. The use case is that having the context in the ADC lets the ADC process decide how to mix the ADC code with the AudioContext, allowing fine-grained control.

---
Am I correct in assuming that there is a 1:1 relation between an Audio Device Client and an AudioContext? Each Audio Device Client can only have one AudioContext, and each AudioContext can only belong to one Audio Device Client. But an AudioContext is optional, and an Audio Device Client can exist without an AudioContext associated with it.

If my assumption is correct, I think it would make sense to flip around the process of creating an AudioContext associated with an Audio Device Client. A `getContext()` method on the Audio Device Client implies that the context is already there and just has to be returned. If the AudioContext were instead created as usual, but with an additional constructor argument, that might better reflect what is actually going on behind the scenes.
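The two association patterns discussed in this thread could be contrasted as follows. Neither is shipped API; the names follow the proposals above, and `myDeviceClient` is assumed to be an existing AudioDeviceClient.

```js
// (a) Getter pattern: implies the context already exists inside the
//     device client and merely has to be handed out.
const contextA = myDeviceClient.getContext();

// (b) Constructor-argument pattern (suggested above): the AudioContext
//     is created as usual, with the association passed in explicitly.
const contextB = new AudioContext({ deviceClient: myDeviceClient });
```

The constructor-argument form makes the creation of the context visible at the call site, which is the point of the suggestion above.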