alternative audio data structures / storing ops instead of applying immediately #55

Open · dy opened this issue May 20, 2018 · 2 comments

dy (Member) commented May 20, 2018

Following audiojs/audio-buffer-list#5.

The current API approach overlaps with a lot of similar components, so it is destined for insignificant competition and questionable value. The main blocker and drawback is the core audio-buffer-list component, which does not bring much value compared to just storing linked audio buffers.

Alternatively, audio could focus on storing edits in progress, rather than being a data wrapper with a linear API, similar to XRay's RGA.

Principle

  • storing operations rather than applying them to data (see the sketch after this list)
    • 👍 no precision loss
    • 👍 faster insertion/removal
    • 👍 allows for collaborative editing
    • 👍 allows for faster re/adjusting of params of an applied control/envelope
    • 👎 possibly somewhat slower playback due to the applied-transforms stack; hopefully heavy-duty fx are not part of the editing process
      • ! possibly compiling the fx program dynamically, akin to regl
      • ! pre-rendering audio for faster playback
  • undo/redo history methods store operations, not a full binary replica at every step
  • branching allows for alternative edit histories

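A minimal sketch of the principle, assuming a hypothetical Audio class (the Op shape, applyOp, and the method names below are illustrative, not an existing API): edits are recorded as operations and only replayed when samples are actually read, so undo/redo reduces to moving ops between two stacks.

```ts
// Hypothetical op shapes — real ops would also carry ids/timestamps for RGA-style merging
type Op =
  | { type: 'insert'; offset: number; data: Float32Array }
  | { type: 'remove'; offset: number; length: number }
  | { type: 'gain'; start: number; end: number; value: number }

// Apply one op to materialized samples (only called at read/playback time)
function applyOp(data: Float32Array, op: Op): Float32Array {
  switch (op.type) {
    case 'insert': {
      const out = new Float32Array(data.length + op.data.length)
      out.set(data.subarray(0, op.offset))
      out.set(op.data, op.offset)
      out.set(data.subarray(op.offset), op.offset + op.data.length)
      return out
    }
    case 'remove': {
      const out = new Float32Array(data.length - op.length)
      out.set(data.subarray(0, op.offset))
      out.set(data.subarray(op.offset + op.length), op.offset)
      return out
    }
    case 'gain': {
      const out = data.slice()
      for (let i = op.start; i < op.end; i++) out[i] *= op.value
      return out
    }
  }
}

class Audio {
  private ops: Op[] = []      // edit history: operations, not data copies
  private undone: Op[] = []   // redo stack

  constructor(private source: Float32Array) {}

  // O(1) edit: just record the op, no samples are touched
  push(op: Op) { this.ops.push(op); this.undone.length = 0 }

  // undo/redo shuffle ops between stacks instead of restoring binary snapshots
  undo() { const op = this.ops.pop(); if (op) this.undone.push(op) }
  redo() { const op = this.undone.pop(); if (op) this.ops.push(op) }

  // materialize samples only on read, replaying the op log over the source
  read(): Float32Array { return this.ops.reduce(applyOp, this.source.slice()) }
}
```

Branching falls out naturally: a branch is just another ops array over the same source.
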
Pros

  • 👍 makes audio unique
  • 👍 makes it suitable for editors
dy (Member, Author) commented Jan 16, 2019

Reference structures:

In fact, git seems to be suitable for that too.

Note also that the class should technically allow any underlying representation: time series, STFT, formants, HPR/HPS/SPS etc. models (https://github.com/MTG/sms-tools/tree/master/software/models), wavelets etc.

👍 In the case of formants, for example, transforms are theoretically many times faster than on the raw time series.
👍 An abstract interface would discard the sampleRate param and make Audio just a time-series data wrapper, possibly even with uncertain/irregular stops. We may want to engage a separate time-series structure for that, which seems to be broadly in demand, from animation/gradient/colormap stops to compact storage of observations.
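
A minimal sketch of that separate time-series structure (the TimeSeries name and the nearest-stop at() lookup are assumptions, not an existing package): values at possibly irregular time stops, with no sampleRate baked in.

```ts
// Hypothetical time-series wrapper: values at (possibly irregular) time stops.
class TimeSeries<T = number> {
  // stops must be sorted by time; values can be samples, gradient stops, observations…
  constructor(private stops: Array<[time: number, value: T]>) {}

  // binary search for the nearest stop at or before `time`;
  // interpolation strategy is left to the consumer
  at(time: number): T | undefined {
    let lo = 0, hi = this.stops.length - 1, found: T | undefined
    while (lo <= hi) {
      const mid = (lo + hi) >> 1
      if (this.stops[mid][0] <= time) { found = this.stops[mid][1]; lo = mid + 1 }
      else hi = mid - 1
    }
    return found
  }
}

// uniform audio is just the special case of evenly spaced stops:
// new TimeSeries(Array.from(samples, (v, i) => [i / sampleRate, v]))
```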

dy (Member, Author) commented May 15, 2019

Lifecycle

  1. Initialize data model
  2. Input data source
    • Convert the input data source to the target model
  3. Modify data source
    • Create a stack of modifiers/reducers/transforms
    • Modifiers can possibly be applied in real time
  4. Play data source
    • Apply the stack of transforms, play / apply transforms per buffer
  5. Get stats
    • Should the model include stat params upfront?
  6. Output data source
    • Apply the stack of transforms, output (see the interface sketch after this list)

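One way the lifecycle could be pinned down as a contract; every name below is hypothetical and merely mirrors the six steps above, not an existing API.

```ts
// Hypothetical contract mirroring the lifecycle steps above
interface AudioModel {
  // 2. ingest a source and convert it to the internal model
  from(source: ArrayBuffer | string): Promise<this>
  // 3. push a modifier onto the transform stack (nothing is applied yet)
  transform(fn: (buf: Float32Array) => Float32Array): this
  // 4. apply the stack per buffer while playing
  play(): void
  // 5. stats, possibly maintained by the model itself
  stats(): { min: number; max: number; rms: number }
  // 6. apply the stack and render the output
  export(): Promise<ArrayBuffer>
}
```
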
Plan

  • Collect the set of concerns / use cases in scope
  • Come up with an ideal API covering all these cases
  • Create baseline/edge/real-world tests for the cases
  • Fix tests

Stores

  • time-series store
  • web-assembly store
  • stft store
  • harmonic model + residual store
  • formants store
  • wavelets store
  • see the sms-tools ref above for other stores (MTG, Barcelona); a common store contract is sketched below

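Whatever the underlying representation, these stores could share one contract. A minimal sketch with hypothetical names (Store, encode/decode/apply) and a deliberately loose Op placeholder:

```ts
// Loose op placeholder — see the op-log sketch in the first comment
type Op = { type: string; [param: string]: unknown }

// Hypothetical common contract for the stores listed above
interface Store {
  // ingest time-domain samples into the store's own representation
  encode(samples: Float32Array, sampleRate: number): void
  // render back to time-domain samples
  decode(): Float32Array
  // try to apply an op natively in-representation
  // (e.g. a pitch shift is cheap on a formant store);
  // returns false if the op must fall back to decode → apply → encode
  apply(op: Op): boolean
}
```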