Subgroup Level Bluestein Algorithm #145
The other length is also committed, making this variable name confusing.
See #145 (comment)
I still think committed_length is confusing. How about ct_length (standing for Cooley-Tukey)?
committed_length is not the Cooley-Tukey length. The committed length is the user-provided length (i.e. the length of the dimension as given in the descriptor).
In that case the other one could be ct_length. We just need something to make it obvious which is which.
It cannot be ct_length; we should not tie it to the name of an algorithm, as it isn't always Cooley-Tukey. I believe length, along with its description above, should be sufficient.
Which other algorithm can it be?
length and committed_length: what does this (and e.g. num_factors) refer to?
The 'length' and 'committed_length' fields are used in the calculation of twiddles and in setting some spec constants like the FFT size.
Later down the control flow, when the twiddles need to be calculated, I would have had to repeatedly compare length and committed_length to check whether there is any padding; instead, this is set in the constructor of dimension_struct itself, indicating that a prime algorithm is used.
I'm lost - I don't think you've really answered my question: is num_factors the number of factors of committed_length or of length?
Sorry, I forgot to add the line about num_factors: num_factors is the number of factors of length.
I do not think the change of parameter type to IdxGlobal makes sense here, as the function needs to cast it back to std::size_t anyway.
It's to avoid additional casts inside prepare_implementation, which currently works with IdxGlobal. Note that even previously we used to cast the size to IdxGlobal inside prepare_implementation; this version simply accepts an IdxGlobal directly.
It returns a size_t because a lot of our calculations pertaining to that field work on size_t, so returning size_t avoids casts there.
We use std::size_t in the public interface and IdxGlobal in kernels. Somewhere in between we need to cast from one type to the other. I just think this should happen exactly once, without any casts back to std::size_t. I am not particularly attached to where that is.
The field length in dimension_struct is stored as std::size_t and is passed to other functions like num_scalars_in_local_mem, which accept it as a size_t, hence the cast back to size_t upon return. We previously cast the fft_size to IdxGlobal inside prepare_implementation, presumably to avoid warnings, and seeing that, I simply made it accept an IdxGlobal.
But that requires additional casts, which make the code harder to read. Make it so that we only cast to IdxGlobal, not back.
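For clarity, a sketch of the "cast exactly once" convention being asked for here. Only prepare_implementation, IdxGlobal, and std::size_t come from this thread; the wrapper function name and the exact definition of IdxGlobal are assumptions.

```cpp
#include <cstddef>
#include <cstdint>

// Assumption: a signed 64-bit index type used by kernel-side code.
using IdxGlobal = std::int64_t;

// Kernel-side code works purely in IdxGlobal - no cast back to std::size_t.
void prepare_implementation(IdxGlobal fft_size);

// Public-facing layer: sizes are std::size_t, as in the descriptor.
void commit_dimension(std::size_t committed_length) {
  // The single point where we cross from the public std::size_t world into
  // the IdxGlobal world used by the kernels.
  prepare_implementation(static_cast<IdxGlobal>(committed_length));
}
```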
I'm confused by this. I thought the signal was just zero-padded?
The chirp signal is wrapped around before computing its DFT, as both positive and negative index values are required in the convolution.
Note that the chirp signal is the one that is convolved with the input in the Bluestein algorithm.
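As a stand-alone illustration (not code from this PR), here is one way the chirp b[n] = exp(iπn²/N) can be laid out in a zero-padded buffer of length M ≥ 2N−1 so that both positive and negative indices are available to the circular convolution; the DFT of this buffer is what gets multiplied with the DFT of the zero-padded, chirp-modulated input.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// N: committed (e.g. prime) length, M >= 2*N - 1: padded convolution length.
std::vector<std::complex<double>> wrapped_chirp(std::size_t N, std::size_t M) {
  const double pi = std::acos(-1.0);
  std::vector<std::complex<double>> chirp(M, {0.0, 0.0});
  for (std::size_t n = 0; n < N; ++n) {
    // b[n] = exp(i*pi*n^2 / N); n^2 is reduced modulo 2N, since the phase has
    // period 2N, to avoid overflow and precision loss for large n.
    const double phase =
        pi * static_cast<double>((n * n) % (2 * N)) / static_cast<double>(N);
    const std::complex<double> b{std::cos(phase), std::sin(phase)};
    chirp[n] = b;                    // positive indices 0 .. N-1
    if (n != 0) chirp[M - n] = b;    // negative indices wrapped to the tail
  }
  // Entries N .. M-N remain zero (the zero padding between the two halves).
  return chirp;
}
```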