Migration Guide

We are currently in the process of testing v1 via release candidates. Despite several major infrastructural changes, migration should be very easy for end users, as things overall just got simpler.

The first step is to add the current release candidate to your pubspec.yaml:

  dependencies:
    audioplayers: ^1.0.0-rc.4

This document lists the major changes to be aware of.

Federation, simplified platform interface

This change is mentioned here as an introduction, but it should require no changes whatsoever on the users' side. We split the package using Flutter's official federation process: you still only need to import the final package, audioplayers, into your project, but be aware that it will fetch the relevant implementation for each platform you support as audioplayers_x packages.
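
For reference, the import in your Dart code stays the same; only the main package is ever imported directly:

  import 'package:audioplayers/audioplayers.dart';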

In order to support this, we also created a vastly simplified audioplayers_platform_interface that makes it much easier for us to add support for other platforms (e.g. desktop). It removes any shortcuts or duplicated methods and leaves everything that can be implemented on the Dart side to the Dart side, so each platform only has to deal with the most basic building blocks it must implement, and nothing else.

AudioCache is dead, long live Sources

One of the main changes was my desire to "kill" the AudioCache API due to the vast confusion it caused among users (despite our best efforts to document everything).

We still have the AudioCache class, but its APIs are exclusively dedicated to transforming asset files into local files, caching them, and providing their paths. End users normally don't need to use it, however, because the AudioPlayer itself is now capable of playing audio from any Source.

What is a Source? It's a sealed class that can be one of:

  1. UrlSource: fetch the audio from a remote URL on the Internet
  2. DeviceFileSource: access a file in the user's device, probably selected by a file picker
  3. AssetSource: play an asset bundled with your app, normally within the assets directory
  4. BytesSource (only some platforms): pass in the bytes of your audio directly (read it from anywhere).

If you use AssetSource, the AudioPlayer will use its instance of AudioCache (which defaults to the global cache if unchanged) automatically. This unifies all playing APIs under AudioPlayer and entirely removes the AudioCache detail for most users.
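
To illustrate (the URL, file path and asset name below are just placeholders), playing from the different kinds of Source looks roughly like this:

  import 'package:audioplayers/audioplayers.dart';

  Future<void> playSomething() async {
    final player = AudioPlayer();

    // Pick whichever source fits your use case; each call here starts a new
    // playback, replacing the previous source on the same player.
    await player.play(UrlSource('https://example.com/my-audio.wav'));
    await player.play(DeviceFileSource('/path/to/local/file.mp3'));
    await player.play(AssetSource('sounds/coin.wav'));
  }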

Simplified APIs, one method per task

We removed the many overlapping method variants and used the concept of Sources to unify everything under AudioPlayer (see the sketch after this list). Now we have separate base methods for:

  1. setSource (taking a Source object; we also provide setSourceX as shortcuts)
  2. setVolume
  3. setAudioContext (though consider using AudioPlayer.global.setGlobalAudioContext instead)
  4. resume (actually starts playing)
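
A minimal sketch of that flow, assuming a placeholder asset path:

  import 'package:audioplayers/audioplayers.dart';

  Future<void> setUpAndPlay() async {
    final player = AudioPlayer();
    await player.setSource(AssetSource('sounds/coin.wav'));
    await player.setVolume(0.5);
    await player.resume(); // actually starts playing
  }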

We still have (other than the handy setSourceX methods) one shortcut left: the play method. I think it's important to keep it, as it might be the easiest way to perform the simplest operation; it does the following:

  1. sets the source via a Source object
  2. optionally sets the volume
  3. optionally sets the audio context
  4. optionally sets the position to seek
  5. optionally sets the player mode
  6. resumes (starts playing)

All in one go. We might decide whether to keep this shortcut, and exactly which parameters it should take, in a future refactor. But for now, we are very happy that we no longer have play and playBytes being essentially clones with different sources on AudioPlayer, plus AudioCache having its own versions and looping versions (it was chaotic before).
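
As an illustrative sketch of that shortcut (the asset path is a placeholder, and the exact optional parameter names are worth double-checking against the API docs):

  await player.play(
    AssetSource('sounds/coin.wav'),
    volume: 0.5,
    position: const Duration(milliseconds: 500),
  );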

Enum name consolidation, some files were shuffled around

Following Dart's current best practices, all enums on the Dart side now use lowerCamelCase constants.
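
For example, assuming you were using the ReleaseMode enum, the change looks roughly like this:

  // before (v0.x):
  await player.setReleaseMode(ReleaseMode.LOOP);
  // after (v1):
  await player.setReleaseMode(ReleaseMode.loop);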

Also, some files might have been shuffled around (even between packages), but nothing that your IDE won't be able to quickly sort out.

AudioContext

For some people this will be irrelevant; for others, it might be the biggest change. Basically, we collected all the scattered flags and parameters related to audio context/session configuration, previously spread throughout the codebase across different methods and at different stages, into a single, unified configuration object called AudioContext, which can be set globally or per player (the per-player option is Android only).
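
A minimal sketch of how this could be wired up (the AudioContext constructor arguments are omitted here; check the class documentation for the real options):

  // Build a context; the default constructor is assumed here, real code
  // will likely pass platform-specific options.
  final audioContext = AudioContext();

  // Apply it globally...
  await AudioPlayer.global.setGlobalAudioContext(audioContext);

  // ...or on a single player (Android only).
  await player.setAudioContext(audioContext);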

For more details, check the Audio Context section of the Getting Started tutorial, or the class documentation itself (which is very comprehensive).