06. Compression
As I explore ideas for compression, I'll be writing about the things I try (successes, failures, and ideas that lead to false hope) here.
(commit)
Take the raw event representation and do a frequency transform of the Δt prediction residuals. Quantize the resulting coefficients. After applying the inverse transform, the reconstruction looks very messy, as in the screenshot below. This is because errors compound: an inaccurate reconstructed Δt value has lasting effects on that pixel's future event timings. Therefore, each group of events that gets a frequency transform should be independent of any prior group of events.
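To make the compounding concrete, here is a minimal Rust sketch, not the codec's actual code: it runs one pixel's Δt sequence through a naive DCT, quantizes the coefficients, inverts the transform, and rebuilds absolute timestamps by summing the lossy Δt values. The Δt values and the quantization step are made up, and the Δt values themselves are transformed rather than prediction residuals, but the drift in absolute time shows the effect.

```rust
// Minimal sketch (not the codec's implementation): transform one pixel's Δt
// values with a naive DCT, quantize the coefficients, invert the transform,
// and rebuild absolute timestamps by summing the lossy Δt values.

/// Naive orthonormal DCT-II.
fn dct(input: &[f64]) -> Vec<f64> {
    let n = input.len() as f64;
    (0..input.len())
        .map(|k| {
            let scale = if k == 0 { (1.0 / n).sqrt() } else { (2.0 / n).sqrt() };
            scale
                * input
                    .iter()
                    .enumerate()
                    .map(|(i, &x)| x * (std::f64::consts::PI / n * (i as f64 + 0.5) * k as f64).cos())
                    .sum::<f64>()
        })
        .collect()
}

/// Inverse of the orthonormal DCT-II (a DCT-III).
fn idct(coeffs: &[f64]) -> Vec<f64> {
    let n = coeffs.len() as f64;
    (0..coeffs.len())
        .map(|i| {
            coeffs
                .iter()
                .enumerate()
                .map(|(k, &c)| {
                    let scale = if k == 0 { (1.0 / n).sqrt() } else { (2.0 / n).sqrt() };
                    scale * c * (std::f64::consts::PI / n * (i as f64 + 0.5) * k as f64).cos()
                })
                .sum::<f64>()
        })
        .collect()
}

fn main() {
    // Hypothetical Δt sequence for one pixel, in ticks.
    let dt: Vec<f64> = vec![255.0, 260.0, 250.0, 258.0, 252.0, 261.0, 249.0, 255.0];

    // Transform, coarsely quantize the coefficients, inverse transform.
    let q = 16.0; // quantization step (made up)
    let lossy: Vec<f64> = dct(&dt).iter().map(|c| (c / q).round() * q).collect();
    let dt_rec = idct(&lossy);

    // Rebuild absolute timestamps by running sums of Δt.
    let (mut t_true, mut t_rec) = (0.0_f64, 0.0_f64);
    for (a, b) in dt.iter().zip(&dt_rec) {
        t_true += a;
        t_rec += b;
        // The per-event Δt error is bounded, but the absolute-time error drifts.
        println!("Δt error = {:6.2}   absolute t error = {:6.2}", b - a, t_rec - t_true);
    }
}
```

The per-event Δt error stays bounded by the quantizer, but the absolute-time error grows with every event, which is why each transformed group has to start from a cleanly known point.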
(commit)
Use the raw event representation in AbsoluteT mode. Encode the t of the first event in each block directly. Subsequent events get the t-residual compared to the first event. Take the DCT of those residuals, and quantize it.
This results in a much better picture, without temporal decoherence. However, many salt-and-pepper-style noise events are introduced. Why?
Increasing Δt_max seems to make this noise worse.
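Here is a minimal sketch of that block layout, with hypothetical types and timestamps rather than the codec's real data structures. The first event's absolute t is coded exactly and every other event stores a residual against it; for brevity the residuals are quantized directly instead of going through the DCT step. The point it illustrates is that each block re-anchors on an exact timestamp, so reconstruction error cannot leak from one block into the next the way it did with chained Δt values.

```rust
// Minimal sketch of the AbsoluteT-style block layout (hypothetical, not the
// codec's actual structures): the first event's t is coded losslessly, the
// remaining events store lossy residuals relative to it.

/// One encoded block: an exact anchor timestamp plus lossy residuals.
struct EncodedBlock {
    t_first: u64,        // absolute t of the first event, coded exactly
    residuals: Vec<i64>, // quantized (t - t_first) for the remaining events
}

fn encode_block(ts: &[u64], q: i64) -> EncodedBlock {
    let t_first = ts[0];
    let residuals = ts[1..]
        .iter()
        .map(|&t| {
            let r = t as i64 - t_first as i64;
            (r / q) * q // coarse quantization, standing in for DCT + quantize
        })
        .collect();
    EncodedBlock { t_first, residuals }
}

fn decode_block(block: &EncodedBlock) -> Vec<u64> {
    let mut ts = vec![block.t_first];
    ts.extend(block.residuals.iter().map(|&r| (block.t_first as i64 + r) as u64));
    ts
}

fn main() {
    // Hypothetical absolute timestamps for one pixel, split into two blocks.
    let blocks = [vec![1000u64, 1255, 1510, 1760], vec![2020, 2275, 2530, 2790]];

    for (i, ts) in blocks.iter().enumerate() {
        let decoded = decode_block(&encode_block(ts, 8));
        // Reconstruction error is bounded per event and resets at every block
        // boundary, because t_first is always exact.
        println!("block {}: true {:?} -> decoded {:?}", i, ts, decoded);
    }
}
```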
(commit)
Use the raw event representation in AbsoluteT mode. Encode the t of the first event in each block directly. Subsequent events get the t-residual compared to the first event. Bitshift each residual in the block by the same amount to get the residuals to fit within i16s.
Less noise when the base shift amount is 0.
More noise when the base shift is 3 bits.
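A sketch of the shared bitshift, again with made-up numbers rather than the real block code: every residual in the block is right-shifted by the same amount, chosen as the smallest shift at or above the base shift that lets the largest residual fit in an i16. A base shift of 3 discards three low-order bits of every residual even when they would have fit unshifted, which is consistent with the extra noise seen above.

```rust
// Minimal sketch of a shared per-block bitshift (hypothetical numbers, not
// the codec's code). All residuals in a block use one shift amount, which
// would be stored once per block alongside the anchor timestamp.

/// Smallest shift >= base_shift that makes every residual fit in an i16.
fn block_shift(residuals: &[i64], base_shift: u32) -> u32 {
    let mut shift = base_shift;
    while residuals.iter().any(|&r| {
        let s = r >> shift;
        s > i16::MAX as i64 || s < i16::MIN as i64
    }) {
        shift += 1;
    }
    shift
}

fn main() {
    let t_first: u64 = 100_000;
    // Hypothetical absolute timestamps of the other events in the block.
    let ts: [u64; 4] = [100_255, 100_510, 101_020, 102_300];
    let residuals: Vec<i64> = ts.iter().map(|&t| t as i64 - t_first as i64).collect();

    for base_shift in [0u32, 3] {
        let shift = block_shift(&residuals, base_shift);
        // Encode: shift down into i16 range; decode: shift back up.
        let coded: Vec<i16> = residuals.iter().map(|&r| (r >> shift) as i16).collect();
        let decoded: Vec<i64> = coded.iter().map(|&c| (c as i64) << shift).collect();
        let max_err = residuals
            .iter()
            .zip(&decoded)
            .map(|(r, d)| (r - d).abs())
            .max()
            .unwrap();
        println!("base shift {} -> shift {}, max residual error {}", base_shift, shift, max_err);
    }
}
```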