
Aggregation of multiple audio I/O devices #4

@hoch

Description


One of the key advantages of ADC is a single callback function for input and output. This is made possible by combining the input and output streams and serving them to the user. As shown in the example below, the user can specify two different IDs for input and output respectively.

// Pick a specific pair of devices by ID.
const constraints = {
  inputDeviceId: inputId,
  outputDeviceId: outputId
};

// Proposed entry point: resolves to a client bound to both devices.
const client = await navigator.mediaDevices.getAudioDeviceClient(constraints);
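To make the single-callback model concrete, registration might look roughly like the sketch below. The setDeviceCallback name and the (input, output) signature are assumptions for illustration only; the actual registration API is defined by the ADC proposal, not by this issue.

// Hypothetical callback registration (names are assumed, not from the spec):
// one function receives the input buffers and fills the output buffers in
// the same invocation.
client.setDeviceCallback((inputChannels, outputChannels) => {
  // inputChannels / outputChannels: arrays of Float32Array, one per channel.
  for (let c = 0; c < outputChannels.length; ++c) {
    const input = inputChannels[c % inputChannels.length];
    outputChannels[c].set(input); // simple pass-through for monitoring
  }
});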

It is common for the two devices to be physically separate (i.e. different clocks, sample rates, and threads). To serve these isolated streams through a single callback, the system needs to re-clock/resample the audio data before handing it to the callback function. This is the so-called "device aggregation" in ADC.
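As a rough sketch of what the aggregation layer has to do internally (this is not API surface, just an illustration), a naive linear-interpolation resampler converts a block captured at one device's rate to the callback's rate. A real implementation would additionally compensate for clock drift and buffer for scheduling jitter.

// Naive linear-interpolation resampler, for illustration only: converts
// `input` (a Float32Array at fromRate Hz) to a Float32Array at toRate Hz.
function resampleLinear(input, fromRate, toRate) {
  const outLength = Math.round(input.length * toRate / fromRate);
  const output = new Float32Array(outLength);
  const step = fromRate / toRate;
  for (let i = 0; i < outLength; ++i) {
    const pos = i * step;
    const idx = Math.floor(pos);
    const frac = pos - idx;
    const s0 = input[Math.min(idx, input.length - 1)];
    const s1 = input[Math.min(idx + 1, input.length - 1)];
    output[i] = s0 + (s1 - s0) * frac; // interpolate between adjacent samples
  }
  return output;
}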

Problem 1. The scope of aggregation

  1. The aggregation should only include one input device and one output device.
  2. The aggregation should be free-for-all (multi-input and multi-output).

For option 2 (which is quite similar to macOS's aggregate device), we can imagine something like this:

// Multiple devices per direction, aggregated into a single client.
const constraints = {
  inputDeviceId: [inputId0, inputId1, inputId2],
  outputDeviceId: [outputId0, outputId2],
};

Problem 2. The configurability of aggregation layer

The aggregation performed by the system involves many parameters: resampling quality, reclocker options, speed/quality trade-offs, and so on. Should ADC expose these options at all? Should we simply say this is up to the UA? Or should the answer be somewhere in the middle?
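For the "somewhere in the middle" option, one possible shape would be a coarse hint rather than low-level knobs. All of the aggregation-related property names below are hypothetical and exist only for discussion; they are not part of the ADC proposal.

// Hypothetical constraint shape: expose a coarse quality/latency hint and
// let the UA choose the concrete resampler and reclocker settings.
// None of the `aggregation` properties below are proposed anywhere.
const constraints = {
  inputDeviceId: [inputId0, inputId1],
  outputDeviceId: [outputId0],
  aggregation: {
    resamplerQuality: 'high',    // e.g. 'low' | 'medium' | 'high'
    latencyHint: 'interactive'   // speed/quality trade-off hint
  }
};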

NOTE: @padenot mentioned at TPAC 2018 that Firefox uses this "re-clocking" mechanism to aggregate and align audio data from multiple devices.
