# De-XHRify BaseAudioContext examples (#30414)

Merged 1 commit on Nov 21, 2023.
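All three pages get the same treatment: the `XMLHttpRequest` loaders are swapped for `fetch()` and the promise-based form of `decodeAudioData()`. As a rough sketch of the shape each diff below follows (the helper name `loadAndDecode` is ours, not from the PR):

```js
// Illustrative sketch only, not code from this PR.
// The XHR open()/responseType/onload/send() dance becomes fetch() + await.
async function loadAndDecode(url, audioCtx) {
  const response = await fetch(url); // replaces XMLHttpRequest open()/send()
  const arrayBuffer = await response.arrayBuffer(); // replaces request.response
  return audioCtx.decodeAudioData(arrayBuffer); // promise form, no callbacks
}
```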
## `files/en-us/web/api/baseaudiocontext/createconvolver/index.md` (22 additions, 32 deletions)
````diff
@@ -32,46 +32,36 @@ A {{domxref("ConvolverNode")}}.

 ## Examples

-The following example shows basic usage of an AudioContext to create a convolver node.
-The basic premise is that you create an AudioBuffer containing a sound sample to be used
-as an ambience to shape the convolution (called the _impulse response_,) and
+### Creating a convolver node
+
+The following example shows how to use an AudioContext to create a convolver node.
+You create an {{domxref("AudioBuffer")}} containing a sound sample to be used
+as an ambience to shape the convolution (called the _impulse response_) and
 apply that to the convolver. The example below uses a short sample of a concert hall
 crowd, so the reverb effect applied is really deep and echoey.

-For more complete applied examples/information, check out our [Voice-change-O-matic](https://github.com/mdn/webaudio-examples/tree/main/voice-change-o-matic) demo (see [app.js lines 108–193](https://github.com/mdn/webaudio-examples/blob/main/voice-change-o-matic/scripts/app.js#L108-L193) for relevant code).
+For more complete applied examples/information, check out our [Voice-change-O-matic](https://mdn.github.io/webaudio-examples/voice-change-o-matic/) demo (see [app.js](https://github.com/mdn/webaudio-examples/blob/main/voice-change-o-matic/scripts/app.js) for the code that is excerpted below).

 ```js
-const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
-const convolver = audioCtx.createConvolver();
-
-// …
-
-// grab audio track via XHR for convolver node
-
-let soundSource, concertHallBuffer;
-
-ajaxRequest = new XMLHttpRequest();
-ajaxRequest.open("GET", "concert-crowd.ogg", true);
-ajaxRequest.responseType = "arraybuffer";
-
-ajaxRequest.onload = () => {
-  const audioData = ajaxRequest.response;
-  audioCtx.decodeAudioData(
-    audioData,
-    (buffer) => {
-      concertHallBuffer = buffer;
-      soundSource = audioCtx.createBufferSource();
-      soundSource.buffer = concertHallBuffer;
-    },
-    (e) => console.error(`Error with decoding audio data: ${e.err}`),
-  );
-};
-
-ajaxRequest.send();
-
-// …
-
-convolver.buffer = concertHallBuffer;
+const audioCtx = new AudioContext();
+// ...
+
+const convolver = audioCtx.createConvolver();
+// ...
+
+// Grab audio track via fetch() for convolver node
+try {
+  const response = await fetch(
+    "https://mdn.github.io/voice-change-o-matic/audio/concert-crowd.ogg",
+  );
+  const arrayBuffer = await response.arrayBuffer();
+  const decodedAudio = await audioCtx.decodeAudioData(arrayBuffer);
+  convolver.buffer = decodedAudio;
+} catch (err) {
+  console.error(`Unable to fetch the audio file. Error: ${err.message}`);
+}
 ```

 ## Specifications
````
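One thing the excerpted snippet leaves implicit: the convolver makes no sound until it sits in a connected graph. A minimal sketch, assuming the `audioCtx` and `convolver` from the example above plus a hypothetical `dryBuffer` holding the audio to be reverberated:

```js
// Sketch under assumed names; `dryBuffer` is a decoded AudioBuffer
// obtained elsewhere, not something this diff defines.
const source = audioCtx.createBufferSource();
source.buffer = dryBuffer;
source.connect(convolver); // feed the dry signal into the convolver
convolver.connect(audioCtx.destination); // reverberated output to the speakers
source.start();
```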
## `files/en-us/web/api/baseaudiocontext/createscriptprocessor/index.md` (57 additions, 65 deletions)
````diff
@@ -57,90 +57,82 @@ A {{domxref("ScriptProcessorNode")}}.

 ## Examples

-The following example shows basic usage of a `ScriptProcessorNode` to take a
-track loaded via {{domxref("BaseAudioContext/decodeAudioData", "AudioContext.decodeAudioData()")}}, process it, adding a bit
-of white noise to each audio sample of the input track (buffer) and play it through the
-{{domxref("AudioDestinationNode")}}. For each channel and each sample frame, the
-`scriptNode.onaudioprocess` function takes the associated
-`audioProcessingEvent` and uses it to loop through each channel of the input
-buffer, and each sample in each channel, and add a small amount of white noise, before
-setting that result to be the output sample in each case.
-
-> **Note:** For a full working example, see our [script-processor-node](https://mdn.github.io/webaudio-examples/script-processor-node/)
-> GitHub repo. (You can also access the [source code](https://github.com/mdn/webaudio-examples/blob/master/script-processor-node/index.html).)
+### Adding white noise using a script processor
+
+The following example shows how to use a `ScriptProcessorNode` to take a track loaded via {{domxref("BaseAudioContext/decodeAudioData", "AudioContext.decodeAudioData()")}}, process it, adding a bit of white noise to each audio sample of the input track, and play it through the {{domxref("AudioDestinationNode")}}.
+
+For each channel and each sample frame, the script node's {{domxref("ScriptProcessorNode.audioprocess_event", "audioprocess")}} event handler uses the associated `audioProcessingEvent` to loop through each channel of the input buffer, and each sample in each channel, and add a small amount of white noise, before setting that result to be the output sample in each case.
+
+> **Note:** You can [run the full example live](https://mdn.github.io/webaudio-examples/script-processor-node/), or [view the source](https://github.com/mdn/webaudio-examples/blob/master/script-processor-node/).

 ```js
 const myScript = document.querySelector("script");
 const myPre = document.querySelector("pre");
 const playButton = document.querySelector("button");

 // Create AudioContext and buffer source
-const audioCtx = new AudioContext();
-const source = audioCtx.createBufferSource();
-
-// Create a ScriptProcessorNode with a bufferSize of 4096 and a single input and output channel
-const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
-console.log(scriptNode.bufferSize);
-
-// load in an audio track via XHR and decodeAudioData
-
-function getData() {
-  request = new XMLHttpRequest();
-  request.open("GET", "viper.ogg", true);
-  request.responseType = "arraybuffer";
-  request.onload = () => {
-    const audioData = request.response;
-
-    audioCtx.decodeAudioData(
-      audioData,
-      (buffer) => {
-        myBuffer = buffer;
-        source.buffer = myBuffer;
-      },
-      (e) => console.error(`Error with decoding audio data: ${e.err}`),
-    );
-  };
-  request.send();
-}
-
-// Give the node a function to process audio events
-scriptNode.onaudioprocess = (audioProcessingEvent) => {
-  // The input buffer is the song we loaded earlier
-  const inputBuffer = audioProcessingEvent.inputBuffer;
-
-  // The output buffer contains the samples that will be modified and played
-  const outputBuffer = audioProcessingEvent.outputBuffer;
-
-  // Loop through the output channels (in this case there is only one)
-  for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
-    const inputData = inputBuffer.getChannelData(channel);
-    const outputData = outputBuffer.getChannelData(channel);
-
-    // Loop through the 4096 samples
-    for (let sample = 0; sample < inputBuffer.length; sample++) {
-      // make output equal to the same as the input
-      outputData[sample] = inputData[sample];
-
-      // add noise to each output sample
-      outputData[sample] += (Math.random() * 2 - 1) * 0.2;
-    }
-  }
-};
-
-getData();
-
-// Wire up the play button
-playButton.onclick = () => {
-  source.connect(scriptNode);
-  scriptNode.connect(audioCtx.destination);
-  source.start();
-};
-
-// When the buffer source stops playing, disconnect everything
-source.onended = () => {
-  source.disconnect(scriptNode);
-  scriptNode.disconnect(audioCtx.destination);
-};
+let audioCtx;
+
+async function init() {
+  audioCtx = new AudioContext();
+  const source = audioCtx.createBufferSource();
+
+  // Create a ScriptProcessorNode with a bufferSize of 4096 and
+  // a single input and output channel
+  const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
+
+  // Load in an audio track using fetch() and decodeAudioData()
+  try {
+    const response = await fetch("viper.ogg");
+    const arrayBuffer = await response.arrayBuffer();
+    source.buffer = await audioCtx.decodeAudioData(arrayBuffer);
+  } catch (err) {
+    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
+  }
+
+  // Give the node a function to process audio events
+  scriptNode.addEventListener("audioprocess", (audioProcessingEvent) => {
+    // The input buffer is the song we loaded earlier
+    let inputBuffer = audioProcessingEvent.inputBuffer;
+
+    // The output buffer contains the samples that will be modified and played
+    let outputBuffer = audioProcessingEvent.outputBuffer;
+
+    // Loop through the output channels (in this case there is only one)
+    for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
+      let inputData = inputBuffer.getChannelData(channel);
+      let outputData = outputBuffer.getChannelData(channel);
+
+      // Loop through the 4096 samples
+      for (let sample = 0; sample < inputBuffer.length; sample++) {
+        // make output equal to the same as the input
+        outputData[sample] = inputData[sample];
+
+        // add noise to each output sample
+        outputData[sample] += (Math.random() * 2 - 1) * 0.1;
+      }
+    }
+  });
+
+  source.connect(scriptNode);
+  scriptNode.connect(audioCtx.destination);
+  source.start();
+
+  // When the buffer source stops playing, disconnect everything
+  source.addEventListener("ended", () => {
+    source.disconnect(scriptNode);
+    scriptNode.disconnect(audioCtx.destination);
+  });
+}
+
+// wire up play button
+playButton.addEventListener("click", () => {
+  if (!audioCtx) {
+    init();
+  }
+});
 ```

 ## Specifications
````
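An editorial aside, not raised in this PR: `ScriptProcessorNode` is deprecated, and the same white-noise pass could live in an `AudioWorkletProcessor` running off the main thread. A sketch, with the file name and registration string being our own choices:

```js
// noise-processor.js (hypothetical file): the white-noise pass as a worklet.
class NoiseAdderProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      const inputData = input[channel];
      const outputData = output[channel];
      for (let sample = 0; sample < outputData.length; sample++) {
        // Pass the input through (silence if unconnected), then add noise
        const dry = inputData ? inputData[sample] : 0;
        outputData[sample] = dry + (Math.random() * 2 - 1) * 0.1;
      }
    }
    return true; // keep the processor alive
  }
}

registerProcessor("noise-adder", NoiseAdderProcessor);
```

On the main thread it would be loaded with `await audioCtx.audioWorklet.addModule("noise-processor.js")` and inserted into the graph as `new AudioWorkletNode(audioCtx, "noise-adder")` in place of `scriptNode`.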
## `files/en-us/web/api/baseaudiocontext/decodeaudiodata/index.md` (32 additions, 57 deletions)
````diff
@@ -58,80 +58,55 @@ In this section we will first cover the promise-based syntax and then the callback syntax.

 ### Promise-based syntax

-In this example `fetchData()` uses {{domxref("fetch()")}} to retrieve an audio
-file asynchronously and decodes it into an {{domxref("AudioBuffer")}}. It then caches the
-`audioBuffer` in the global `buffer` variable for later playback.
+In this example `loadAudio()` uses {{domxref("fetch()")}} to retrieve an audio file and decodes it into an {{domxref("AudioBuffer")}}. It then caches the `audioBuffer` in the global `buffer` variable for later playback.

-> **Note:** This example is based on a fully functioning web page that you can [run live](https://mdn.github.io/webaudio-examples/decode-audio-promise/). The complete source code is [here](https://github.com/mdn/webaudio-examples/tree/master/decode-audio-promise).
+> **Note:** You can [run the full example live](https://mdn.github.io/webaudio-examples/decode-audio-data/promise/), or [view the source](https://github.com/mdn/webaudio-examples/blob/master/decode-audio-data/promise/).

 ```js
-const audioCtx = new AudioContext();
+let audioCtx;
 let buffer;
 let source;

-fetchAudio("viper").then((buf) => {
-  // executes when buffer has been decoded
-  buffer = buf;
-});
-
-// fetchAudio() returns a Promise
-// it uses fetch() to load an audio file
-// it uses decodeAudioData to decode it into an AudioBuffer
-// decoded AudioBuffer is buf argument for Promise.then((buf) => {})
-// play.onclick() creates a single-use AudioBufferSourceNode
-async function fetchAudio(name) {
+async function loadAudio() {
   try {
-    let rsvp = await fetch(`${name}.mp3`);
-    return audioCtx.decodeAudioData(await rsvp.arrayBuffer()); // returns a Promise, buffer is arg for .then((arg) => {})
+    // Load an audio file
+    const response = await fetch("viper.mp3");
+    // Decode it
+    buffer = await audioCtx.decodeAudioData(await response.arrayBuffer());
   } catch (err) {
-    console.log(
-      `Unable to fetch the audio file: ${name} Error: ${err.message}`,
-    );
+    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
   }
 }
 ```
````
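The prose says the decoded audio is cached in `buffer` "for later playback", but the hunk ends before that later step. A minimal sketch of it, assuming a play button that the diff does not show:

```js
// Sketch: the "later playback" step. The #play button is assumed;
// it does not appear anywhere in this diff.
document.querySelector("#play").addEventListener("click", async () => {
  if (!audioCtx) {
    // Create the context on a user gesture to satisfy autoplay policies
    audioCtx = new AudioContext();
    await loadAudio();
  }
  source = audioCtx.createBufferSource(); // AudioBufferSourceNodes are single-use
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start();
});
```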

````diff
 ### Callback syntax

-In this example `getAudio()` uses XHR to load an audio track.
-It sets the `responseType` of the request to `arraybuffer` so that
-it returns an array buffer as its `response`. It caches the array buffer
-in the local `audioData` variable in the XHR `onload` event handler, then
-passes it to `decodeAudioData()`. The success callback caches the decoded
-{{domxref("AudioBuffer")}} in the global `buffer` variable for later playback.
+In this example `loadAudio()` uses {{domxref("fetch()")}} to retrieve an audio
+file and decodes it into an {{domxref("AudioBuffer")}} using the callback-based version of `decodeAudioData()`. In the callback, it plays the decoded buffer.

-> **Note:** You can [run the example live](https://mdn.github.io/webaudio-examples/decode-audio-data/) and access the [source code](https://github.com/mdn/webaudio-examples/tree/master/decode-audio-data).
+> **Note:** You can [run the full example live](https://mdn.github.io/webaudio-examples/decode-audio-data/callback/), or [view the source](https://github.com/mdn/webaudio-examples/blob/master/decode-audio-data/callback/).

 ```js
-const audioCtx = new AudioContext();
-let buffer;
+let audioCtx;
 let source;

 function playBuffer(buffer) {
   source = audioCtx.createBufferSource();
   source.buffer = buffer;
   source.connect(audioCtx.destination);
   source.loop = true;
   source.start();
 }

-getAudio("viper");
-
-// getAudio() has no return value
-// it uses XHR to load an audio file
-// it uses decodeAudioData to decode it into an AudioBuffer
-// decoded AudioBuffer is buf argument to callback function
-// play.onclick() creates a single-use AudioBufferSourceNode
-function getAudio(name) {
-  request = new XMLHttpRequest();
-  request.open("GET", `${name}.mp3`, true);
-  request.responseType = "arraybuffer";
-  request.onload = () => {
-    let audioData = request.response;
-    audioCtx.decodeAudioData(
-      audioData,
-      (buf) => {
-        // executes when buffer has been decoded
-        buffer = buf;
-      },
-      (err) => {
-        console.error(
-          `Unable to get the audio file: ${name} Error: ${err.message}`,
-        );
-      },
-    );
-  };
-  request.send();
+async function loadAudio() {
+  try {
+    // Load an audio file
+    const response = await fetch("viper.mp3");
+    // Decode it
+    audioCtx.decodeAudioData(await response.arrayBuffer(), playBuffer);
+  } catch (err) {
+    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
+  }
 }
 ```
````
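For completeness, a sketch of kicking the callback version off, mirroring the lazy `AudioContext` creation used in the script-processor example above; the button element is our assumption:

```js
// Sketch: lazily create the context on first click, then load and decode;
// playBuffer() above starts looping playback once decoding finishes.
document.querySelector("button").addEventListener("click", () => {
  if (!audioCtx) {
    audioCtx = new AudioContext();
    loadAudio();
  }
});
```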