Reduction of Gibbs Phenomenon on FFT compression #124
@rukai, asking for your help on this review due to the way vectors/arrays are being padded, sliced, and cloned. Any suggestions to improve the performance of this code are welcome! Thanks, Carlos
Here's a bunch of things you can do to reduce allocations.
That loop scares me though. I don't have the math skills to reduce it to an expression, but if it's possible then that seems important.
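As one illustration of the kind of allocation reduction being suggested (my example, not the reviewer's exact changes): helpers that only read a frame can borrow it as a slice, so callers pass sub-ranges of an existing buffer without `to_vec()` copies.

```rust
/// Illustrative only: taking `&[f64]` lets callers pass `&data[a..b]`
/// directly, avoiding a cloned Vec for every frame.
fn frame_energy(frame: &[f64]) -> f64 {
    frame.iter().map(|x| x * x).sum()
}
```

A caller can then write `frame_energy(&data[start..end])` with no allocation at all.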
```rust
/// Given a number `n`, returns the next number of the form 2^N * 3^M.
pub fn next_size(mut n: usize) -> usize {
    n += 1;
    while !is_decomposable(n) {
        n += 1;
    }
    n
}
```
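For reference, `is_decomposable` itself is not shown in the diff; a minimal sketch of what such a check presumably does (treat this as a reconstruction, not the PR's actual helper):

```rust
/// Returns true if `n` factors entirely into 2s and 3s, i.e. n = 2^a * 3^b.
/// Hypothetical reconstruction of the helper used by `next_size`.
fn is_decomposable(mut n: usize) -> bool {
    if n == 0 {
        return false;
    }
    // Divide out all factors of 2, then all factors of 3.
    while n % 2 == 0 {
        n /= 2;
    }
    while n % 3 == 0 {
        n /= 3;
    }
    // If nothing else remains, n was a product of 2s and 3s only.
    n == 1
}
```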
All this looping looks slow; are you sure there isn't a way to get the same result with an expression instead?
Prime numbers are a pain, and I don't see how to improve this. Let me see if I can find a better solution.
This is only run once per frame, on compression and decompression.
Given that we force frames into a subset of sizes (powers of 2), I don't expect this loop to run for long; on larger numbers it might become problematic.
An alternative is to provide a table of the next valid size for the expected frame sizes and compute it only for everything else (we already hardcode some pre-defined frame sizes anyway).
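Short of a closed-form expression, the scan can also be bounded by enumerating candidates directly: for each power of 3 up to the answer, round up with a power of 2 and keep the minimum. A sketch of that idea (my illustration, not code from this PR):

```rust
/// Smallest number strictly greater than `n` of the form 2^a * 3^b.
/// Enumerates O(log n) powers of 3, each rounded up by a power of 2,
/// instead of testing every integer after `n`.
fn next_size_bounded(n: usize) -> usize {
    let target = n + 1;
    let mut best = usize::MAX;
    let mut p3: usize = 1;
    while p3 < best {
        // Smallest p3 * 2^a that reaches the target.
        let mut candidate = p3;
        while candidate < target {
            candidate *= 2;
        }
        best = best.min(candidate);
        match p3.checked_mul(3) {
            Some(next) => p3 = next,
            None => break,
        }
    }
    best
}
```

This visits roughly log3(n) * log2(n) candidates in the worst case, rather than every integer between `n` and the answer.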
A table does sound like a good idea.
It could be left for follow-up work.
Thanks for catching these issues in the e2e tests. It's a bit embarrassing to see how I messed up here.
Everyone makes mistakes; the tests did their job, and that is why I caught them! :)
The current FFT implementation disregards the Gibbs phenomenon at the boundaries of the frame. This creates artifacts that need correction at those transitions.
This PR mitigates the Gibbs phenomenon at the frame edges by expanding each frame with copies of its first and last values during the FFT calculation (and the inverse calculation).
This creates an attenuation effect without impacting the compression ratio. The tradeoff is higher memory consumption (up to 5%) when using the FFT compressor.
This image illustrates a frame transition with the current code base and with the one proposed by this fix.
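The expansion described above amounts to repeating the frame's edge samples before transforming, then slicing the padding back off after the inverse transform. A rough sketch of the padding step (illustrative; the `pad` parameter and function name are my own, not the PR's):

```rust
/// Extend a frame by repeating its first and last samples `pad` times on
/// each side, so the periodic extension seen by the FFT has no sharp jump
/// at the frame boundary.
fn pad_edges(frame: &[f64], pad: usize) -> Vec<f64> {
    let mut out = Vec::with_capacity(frame.len() + 2 * pad);
    if let (Some(&first), Some(&last)) = (frame.first(), frame.last()) {
        out.extend(std::iter::repeat(first).take(pad));
        out.extend_from_slice(frame);
        out.extend(std::iter::repeat(last).take(pad));
    }
    out
}
```

After the inverse FFT, the original frame is recovered by slicing the padding off again, e.g. `&padded[pad..padded.len() - pad]`.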