Chore: Audiobookshelf - Testing/ snapshots #1915

Closed · wants to merge 15 commits

Conversation

@fmunkes (Contributor) commented Jan 26, 2025


I started with the testing part, though there are no snapshots yet. I created multiple json response files over multiple versions of abs via the same curl commands.

My current questions are:

  • Shell script approach: is that reasonable?
  • Testing file: am I correct that what I have in there properly tests the parsing part of the schema definition?
  • To also test my parse functions, which are part of the Audiobookshelf class, do I have to move them out of that class? I had a look at the jellyfin implementation, and that's how I understand it is done there.
  • I don't fully understand yet where the snapshots come in, but I guess they apply once I use my own parse functions.

@marcelveldt (Member) commented

@Jc2k is more of an expert on this terrain, but my first feeling is that you should probably split off the whole ABS logic into its own library; if you want, you can even host it in the MA org. In the end, that is also our goal for the other providers.

Maybe not today, but at some point you'll want to isolate the pure API handling from the interaction with MA.

@fmunkes (Contributor, Author) commented Jan 27, 2025

> @Jc2k is more of an expert on this terrain, but my first feeling is that you should probably split off the whole ABS logic into its own library; if you want, you can even host it in the MA org. In the end, that is also our goal for the other providers.
>
> Maybe not today, but at some point you'll want to isolate the pure API handling from the interaction with MA.

I am willing to pursue the goal of having it separated at some point, but let's focus on making the current implementation feature-complete before jumping into this.

@Jc2k (Contributor) commented Jan 27, 2025

I agree that having an external lib would be good. I've set up aiojellyfin to do releases automatically when I push (the feat/fix/chore commit tags control whether the major or minor version number gets bumped), so it doesn't add much complexity to my contributions once it's set up. I could probably help with setting that up.

That would mean, at a minimum, that abs_schema and abs_client would go into their own library on PyPI. They would not depend on any MA code. All the tests in this PR would go in that library. The tests in MA itself would target Audiobookshelf._parse_*.
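
For illustration, a rough sketch of what such an MA-side test might look like (the external package, module, and helper names below are all hypothetical; nothing is decided in this PR):

```python
# Hypothetical sketch: an MA test that targets a parse helper directly,
# with the schema imported from the (not yet existing) external library.
import json
import pathlib

from abs_client.schema import Podcast  # hypothetical external PyPI package

from music_assistant.providers.audiobookshelf import parse_podcast  # hypothetical helper

FIXTURES = pathlib.Path(__file__).parent / "fixtures"


def test_parse_podcast() -> None:
    raw = json.loads((FIXTURES / "podcast.json").read_text())
    abs_podcast = Podcast.from_dict(raw)  # mashumaro deserialization
    ma_podcast = parse_podcast(abs_podcast)
    assert ma_podcast.name  # a snapshot assert would be stronger, see below
```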

> Shell script approach: is that reasonable?

I'd definitely want to have something like that for aiojellyfin at some point, but I probably wouldn't commit it to the MA repo.

> Testing file: am I correct that what I have in there properly tests the parsing part of the schema definition?

Your tests at the moment verify that the mashumaro models are valid enough not to explode, but without snapshot asserts you aren't guarding against future regressions (unexpected changes to the output).
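
To make that concrete, here is a minimal sketch of the difference, assuming syrupy as the pytest snapshot plugin and a hypothetical Library schema class:

```python
# Illustrative only: "doesn't explode" vs. an actual snapshot assertion,
# using the syrupy pytest plugin (which provides the `snapshot` fixture).
import json
import pathlib

from abs_client.schema import Library  # hypothetical schema module

FIXTURE = pathlib.Path(__file__).parent / "fixtures" / "libraries.json"


def test_library_parses() -> None:
    # Only proves deserialization doesn't raise; a silently dropped
    # field would still pass.
    Library.from_dict(json.loads(FIXTURE.read_text()))


def test_library_matches_snapshot(snapshot) -> None:
    library = Library.from_dict(json.loads(FIXTURE.read_text()))
    # Fails whenever the parsed output drifts from the recorded snapshot.
    assert library.to_dict() == snapshot
```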

> To also test my parse functions, which are part of the Audiobookshelf class, do I have to move them out of that class? I had a look at the jellyfin implementation, and that's how I understand it is done there.

I would move them yes.

If you follow the provider class hierarchy down to Provider, you'll see you need a MusicAssistant, a ProviderManifest, and a ProviderConfig to make an instance of any provider. We do have a helper to get a running MusicAssistant, but it is much slower than testing the parse function directly.

The 'parsing' functions generally don't need to be async or do I/O; they just transform some mashumaro object or dictionary into one of our models, like PodcastEpisode, and they are generally stateless. In short, they are normally really good candidates for testing outside of a running MusicAssistant instance. Not only is this faster, it's usually more reliable.
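
A minimal, self-contained sketch of that idea (the names and the raw dict layout are made up, not the real ABS API):

```python
# All names here are made up. The point: a parse function that is a plain,
# synchronous transformation can be tested without a MusicAssistant instance.
from dataclasses import dataclass


@dataclass
class PodcastEpisode:  # stand-in for the real MA model
    item_id: str
    name: str
    duration: int


def parse_podcast_episode(raw: dict) -> PodcastEpisode:
    """Pure transformation: no I/O, no async, no provider state."""
    metadata = raw["media"]["metadata"]
    return PodcastEpisode(
        item_id=raw["id"],
        name=metadata["title"],
        duration=int(raw["media"].get("duration", 0)),
    )


def test_parse_podcast_episode() -> None:
    raw = {"id": "ep1", "media": {"metadata": {"title": "Episode 1"}, "duration": 1800}}
    assert parse_podcast_episode(raw) == PodcastEpisode(
        item_id="ep1", name="Episode 1", duration=1800
    )
```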

> I don't fully understand yet where the snapshots come in, but I guess they apply once I use my own parse functions.

Right now, your tests don't assert anything, so there are no snapshots. The code runs, and that is a useful test, but if you changed a model in such a way that it failed silently (an optional field with a default value?), they wouldn't offer any value. Snapshot testing means asserting that the output of the parsers hasn't changed since you last updated the snapshot. So if I updated a model and accidentally deleted a field, the test output would no longer match the recorded snapshot. Boom, the test has done its job.
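
Continuing the parse sketch above (and again assuming syrupy), the flow would be: run `pytest --snapshot-update` once to record the output, then plain `pytest` guards it from then on:

```python
# Assumes the parse_podcast_episode sketch above and the syrupy plugin.
def test_parse_output_is_stable(snapshot) -> None:
    raw = {"id": "ep1", "media": {"metadata": {"title": "Episode 1"}, "duration": 1800}}
    # First recorded with `pytest --snapshot-update`; afterwards this fails
    # the moment a model change (e.g. a deleted field) alters the output.
    assert parse_podcast_episode(raw) == snapshot
```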

@fmunkes (Contributor, Author) commented Jan 27, 2025

After discussing on Discord: closing this in favor of moving the client into its own library with proper testing.
Thanks for the very detailed reply @Jc2k, that is all valuable information for me!

@fmunkes closed this Jan 27, 2025