Releases: diffusionstudio/core
v1.1.1
- Implemented ability to insert clips into a stacked track at a given index e.g.
track.add(new Clip(), 2)
- Ensured clips within stacked tracks can be split without the track being reordered
Example
const composition = new core.Composition();
const track = composition.createTrack('base').stacked();
await track.add(new core.Clip({ name: 'foo' }));
await track.add(new core.Clip({ name: 'bar' }));
// now add a clip in between
await track.add(new core.Clip({ name: 'pong' }), 1);
console.log(track.clips[0].name); // foo
console.log(track.clips[1].name); // pong
console.log(track.clips[2].name); // bar
v1.1.0
v1.0.1
v1.0.0
v1.0.0-rc.8
- added partial video loading support
- increased test coverage to >90%
v1.0.0-rc.6
Basic usage of the new opus encoder
import { OpusEncoder } from '@diffusionstudio/core';
const encoder = new OpusEncoder({
output: (chunk, meta) => {
// mux
},
error: console.error,
});
await encoder.configure({
numberOfChannels: 1,
sampleRate: 48000,
});
encoder.encode({
data: new Int16Array(24000),
numberOfFrames: 24000,
});
The new opus encoder replaces the WebCodecs AudioEncoder, enabling Diffusion Studio to run in all major browsers.
v1.0.0-rc.4
- Integrated a Motion Canvas-inspired animation API
Example
const text = await composition.add(
new core.TextClip({
text: 'Hello World',
position: 'center',
fontSize: 34
})
);
// begin with calling the animate method
text.animate()
.rotation(243).to(360 * 2, 15) // start at 243 deg, and animate to 2 * 360 deg in 15 frames
.scale(0.3).to(1, 10) // scale from 0.3 to 1 in 10 frames
v1.0.0-rc.3
- Improved render performance by ~24%
- Implemented new clip lifecycle with the following phases:
- constructor: invoked first, during the initialization of the clip. This is where the initial state and values should be set up.
- init: called asynchronously before the Clip is added to a track/composition.
- enter: triggered right before the Clip is drawn to the canvas.
- update: called on every redraw of the clip.
- exit: called after the Clip has been drawn to the canvas for the last time.
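The phase ordering above can be demonstrated with a minimal driver. This is a sketch only; the class and method names mirror the phase list, not the library's actual internals:

```typescript
// Illustrative sketch of the clip lifecycle order (not library source).
class SketchClip {
  log: string[] = [];
  constructor() { this.log.push('constructor'); } // initial state is set up here
  async init() { this.log.push('init'); }         // async, before being added to a track
  enter() { this.log.push('enter'); }             // right before the first draw
  update() { this.log.push('update'); }           // on every redraw
  exit() { this.log.push('exit'); }               // after the last draw
}

// Hypothetical driver that runs the phases in the documented order.
async function renderOnce(clip: SketchClip, frames: number) {
  await clip.init();
  clip.enter();
  for (let i = 0; i < frames; i++) clip.update();
  clip.exit();
}
```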
v1.0.0-rc.2
Functional Properties
We introduced functional properties for advanced animations. Instead of a static clip value or Keyframe, you can now assign a function.
Example
await composition.add(
new core.ImageClip(new File(...), {
x: (time: core.Timestamp) => time.seconds * 500,
y: (time: core.Timestamp) => time.seconds * 200,
})
);
When used with regular (non-arrow) functions, the this context will be the Clip itself.
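The mechanics can be sketched in a few lines. This is an assumption about how such a property would be resolved, not the library's code: plain values pass through, while functions are invoked with the current time and bound to the clip so this behaves as described.

```typescript
// Sketch of resolving a function-valued property (hypothetical helper).
type Timestamp = { seconds: number };
type PropValue = number | ((this: ClipSketch, time: Timestamp) => number);

class ClipSketch {
  constructor(public x: PropValue = 0) {}

  resolveX(time: Timestamp): number {
    // Functions are called with the clip as `this`; plain values pass through.
    return typeof this.x === 'function' ? this.x.call(this, time) : this.x;
  }
}

// At t = 2s, x: (time) => time.seconds * 500 resolves to 1000.
const moving = new ClipSketch((time) => time.seconds * 500);
```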
v1.0.0-rc.1
Some breaking changes were introduced with v1.0.0-rc.1. Here is a migration guide:
appendTrack has been renamed to shiftTrack.
before:
const track = composition.appendTrack(VideoTrack);
after:
const track = composition.shiftTrack(VideoTrack);
appendClip has been renamed to add.
before:
const clip = await composition.appendClip(new Clip());
// when using tracks
const clip = await track.appendClip(new Clip());
after:
const clip = await composition.add(new Clip());
// when using tracks
const clip = await track.add(new Clip());
position has been renamed to layer on the track object.
before:
const track = composition.appendTrack(VideoTrack).position('bottom');
after:
const track = composition.shiftTrack(VideoTrack).layer('bottom');
New Features
A new method for creating tracks has been introduced:
const track = composition.createTrack('video');
// equivalent to
const track = composition.shiftTrack(VideoTrack);
This enabled us to add a new method to the MediaClip for creating captions, which was previously not possible due to circular dependencies:
const audio = new AudioClip(new File(), { transcript: new Transcript() });
await composition.add(audio);
await audio.generateCaptions();
Note: the MediaClip needs to be added to the composition for the generateCaptions method to be available.