Hi,
I find this a bit unintuitive:
```python
>>> bitmath.KiB(1/3).to_Bit()
Bit(2730.6666666666665)
```
I would expect this to throw an exception (I did not specify an integer count of bytes).
The result carries about 12 decimal places of precision, but a bit is indivisible, so 0.666 of a bit is not a quantity that can exist.
What is the purpose of using floating point sizes in bitmath, when you could represent bit count as an integer everywhere?
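To make the reported behavior concrete without requiring bitmath itself, here is the conversion arithmetic the library performs behind the scenes (a KiB is 1024 bytes and a byte is 8 bits), showing how the fractional input propagates straight through to the bit count:

```python
# Reproduce the arithmetic behind bitmath.KiB(1/3).to_Bit() without the
# library: the float value passes unchecked through the unit conversion.
kib = 1 / 3            # fractional KiB, as in the report
bits = kib * 1024 * 8  # 8192 bits per KiB
print(bits)            # 2730.6666666666665 -- a fractional number of bits
```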
I think the best solution will be to `math.floor()` the results from division.
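A minimal sketch of that suggestion, with a hypothetical helper (`to_whole_bits` is not part of bitmath's API; it just illustrates flooring a conversion result so no fractional bit is ever reported):

```python
import math

def to_whole_bits(value, bits_per_unit):
    """Convert `value` units (e.g. KiB) to a whole number of bits,
    discarding any fractional remainder via math.floor()."""
    return math.floor(value * bits_per_unit)

print(to_whole_bits(1 / 3, 8192))  # 2730 instead of 2730.666...
```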
Things being a float by default is more of a core design choice I made forever ago that I need to look at again a lot more closely.
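For comparison, the reporter's integer-everywhere alternative could look something like this rough sketch (`BitCount` is a hypothetical class, not bitmath's design): store sizes as an exact bit count and raise on any construction that does not land on a whole bit, rather than silently keeping a float.

```python
class BitCount:
    """Hypothetical integer-backed size: rejects fractional bits."""

    def __init__(self, bits):
        if bits != int(bits):
            raise ValueError(f"{bits} is not a whole number of bits")
        self.bits = int(bits)

    @classmethod
    def from_kib(cls, kib):
        # 8192 bits per KiB; a fractional result raises in __init__
        return cls(kib * 8192)

print(BitCount.from_kib(1).bits)  # 8192
# BitCount.from_kib(1 / 3) would raise ValueError instead of
# returning 2730.666... bits.
```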