Description
As currently proposed, the ADC includes an AudioContext and provides a callback that runs the audio graph and returns the data the graph produces.
The ADC can also support different render sizes. Say 64 frames is the suggested hardware size: how does that work with an AudioContext that must render 128 frames at a time? Especially if there is an input to the AudioContext that is supposed to be generated by the ADC and fed back into the AudioContext?
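To make the mismatch concrete, here is a minimal sketch of the kind of adapter the ADC (or the author of the callback) would otherwise need: it renders one 128-frame quantum and doles it out across two 64-frame device callbacks. The callback shape, mono output, and the `renderQuantum` function are assumptions for illustration, not the proposed ADC API.

```ts
// Sketch: bridging a 128-frame AudioContext render to a 64-frame device callback.
const RENDER_QUANTUM = 128; // what the AudioContext produces per render call
const DEVICE_FRAMES = 64;   // hypothetical hardware-preferred block size

// Hypothetical stand-in for "render one quantum of the AudioContext graph".
type RenderFn = (output: Float32Array) => void;

function makeDeviceCallback(renderQuantum: RenderFn) {
  const buffered = new Float32Array(RENDER_QUANTUM); // holds one rendered quantum
  let readIndex = RENDER_QUANTUM;                    // start "empty"

  // Called by the device client once per 64-frame hardware block (mono for brevity).
  return (deviceOutput: Float32Array) => {
    if (readIndex >= RENDER_QUANTUM) {
      renderQuantum(buffered); // render 128 frames at once
      readIndex = 0;
    }
    deviceOutput.set(buffered.subarray(readIndex, readIndex + DEVICE_FRAMES));
    readIndex += DEVICE_FRAMES;
  };
}
```

Note that on the input side the adapter would have to accumulate two 64-frame device blocks before the context could consume a full 128-frame quantum, so the buffering adds latency in each direction.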
Perhaps the solution is to allow the AudioContext to run at different block sizes? Then it could match the optimum (or selected) value for the ADC.
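Something along these lines, purely hypothetical (the option name is invented here and is not part of the current spec), is what that could look like: the context and the device client simply agree on the same block size, so no adapter buffering is needed in either direction.

```ts
// Purely hypothetical: an AudioContext constructed with a matching block size.
// "renderBlockSize" is an invented option name, not a real AudioContextOptions member.
declare const AudioContextWithBlockSize: {
  new (options: { sampleRate?: number; renderBlockSize?: number }): AudioContext;
};

// Context renders 64-frame quanta, matching the ADC's preferred hardware size.
const ctx = new AudioContextWithBlockSize({ sampleRate: 48000, renderBlockSize: 64 });
```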