TimeSpan.FromSeconds does not return expected value after recent change to precision #13230
cc: @Anipik, @tannergooding |
This has to do with the precision of the double-to-ticks conversion. There are some tricks that could be done to keep it accurate without incurring the rounding loss/etc., but they may come at a perf cost. |
@tannergooding I wonder why you don't want to round? I think rounding is the best thing to do here; I always round in my numeric code when I'm casting from float to int, it's 50% more precise on average. The performance cost is barely measurable. If you have SSE4.1 it's a single instruction, roundsd. If you don't have SSE4.1 it's slightly more, but also cheap. |
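To make the truncate-versus-round trade-off concrete, here is a small illustrative snippet (mine, not from the thread), using the 78043.43-second value quoted later in this issue:

```csharp
using System;

class RoundVsTruncate
{
    const double TicksPerSecond = 10_000_000;

    static void Main()
    {
        double seconds = 78043.43;
        double ticks = seconds * TicksPerSecond;   // 780434299999.99988

        long truncated = (long)ticks;              // 780434299999 (one tick short)
        long rounded = (long)Math.Round(ticks);    // 780434300000

        Console.WriteLine($"{truncated} vs {rounded}");
    }
}
```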
Yes this change in behaviour confused me no end. The documentation is now wrong (because it is documented to round to 1ms) - I've described this in a stackoverflow question |
Keeping this in 5.0 to at least document the new/changed behavior. |
@tannergooding were you planning to do something here? |
I don't think there is anything reasonable that could be done for .NET 5 aside from documenting the change. Anything further would require quite a bit of investigation and would likely hit some barriers simply due to the precision loss introduced by floating-point values |
We have come across this bug in .NET 5, after migrating from .NET 4.6 to .NET 5/6. As you can see below, the .NET 5 version has less precision than .NET 4.6 and so returns different values. The .NET 5 TimeSpan.FromMilliseconds() method has chopped some of the value off. We found this because it caused our app to crash when trying to parse what was previously 800 to an int.
.NET 4.6 (good)
.NET 5 or 6 (bad)
|
This is a case of more precision, not less. |
At first it looks like more but it is less: .NET 5 rounds down when it should be rounding up. .NET 4.6 = 0.79999999999999993 |
I've done some investigation and found the reason for the bug in .NET 5/6/7: All versions use a similar Interval conversion method.
BUT the main difference is that .NET 4.7 has an extra line to handle double rounding, whereas the other .NET versions do not, which causes the bug by truncating the double value without adjusting for double rounding before type casting to a long. [Edit] I've corrected my above statement about the bug, but I don't see any strike-thru option. .NET 4.7
.NET 5
|
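The code snippets from this comment did not survive formatting here. As a rough, from-memory sketch (not the verbatim runtime sources, and with overflow checks omitted), the two approaches being contrasted look roughly like this: the .NET Framework version nudges the scaled milliseconds by ±0.5 before truncating, while the .NET 5 version scales straight to ticks and truncates:

```csharp
using System;

static class TimeSpanConversionSketch
{
    const long TicksPerMillisecond = 10_000;

    // .NET Framework-style: round to the nearest millisecond via a +0.5 / -0.5
    // adjustment, then scale the whole milliseconds up to ticks.
    public static long OldStyleTicks(double value, int millisPerUnit)
    {
        double millis = value * millisPerUnit + (value >= 0 ? 0.5 : -0.5);
        return (long)millis * TicksPerMillisecond;   // millisecond precision only
    }

    // .NET 5-style: scale straight to ticks and truncate, keeping sub-millisecond
    // precision but dropping the fractional tick (the truncation discussed above).
    public static long NewStyleTicks(double value, double ticksPerUnit)
    {
        return (long)(value * ticksPerUnit);
    }

    static void Main()
    {
        Console.WriteLine(OldStyleTicks(78043.43, 1000));        // 780434300000
        Console.WriteLine(NewStyleTicks(78043.43, 10_000_000));  // 780434299999
    }
}
```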
I have hacked a copy of the .NET 5.0 TimeSpan code to test this for FromMilliseconds(). (yes I know I can fork the code :))
|
The TimeSpanTests is missing the edge cases for doubles and only has safe values; I believe it should also include numbers like 0.99999999999999993.
|
This was deleted in dotnet/coreclr#24279 for some reason. |
The logic isn't really correct and isn't correctly accounting for rounding or floating-point inaccuracies for large values. |
Maybe, as stated, it's just a "hack" (from the .NET 4.7 code) to demonstrate there is a bug in .NET 5/6/7; since the From methods take doubles, the current code does not correctly handle doubles. :) |
The algorithm, realistically, should be breaking the input into its integral and fractional portions. The integral portion, provided it's in range of long, can be scaled exactly. The fractional portion then needs to be carefully handled since scaling it will result in precision loss.
|
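A minimal sketch of that split-and-scale idea, using FromSeconds as the example (my own illustration of the approach described above, not a proposed runtime change; overflow handling is reduced to a checked context):

```csharp
using System;

static class SplitConversionSketch
{
    const long TicksPerSecond = 10_000_000;

    // Convert seconds (double) to ticks by handling the integral and fractional
    // parts separately, so the only rounding happens on the sub-second portion.
    public static long TicksFromSeconds(double seconds)
    {
        double whole = Math.Truncate(seconds);   // integral part, exactly representable
        double frac = seconds - whole;           // exact (Sterbenz's lemma; trivially exact when whole == 0)

        long wholeTicks = checked((long)whole * TicksPerSecond);    // exact integer scaling
        long fracTicks = (long)Math.Round(frac * TicksPerSecond);   // single rounding step, < 1 tick error

        return checked(wholeTicks + fracTicks);
    }

    static void Main()
    {
        Console.WriteLine(TicksFromSeconds(78043.43));   // 780434300000
        Console.WriteLine(TicksFromSeconds(0.8));        // 8000000
    }
}
```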
@tannergooding The old way was strictly better. The rounding used in the old .NET runtime resulted in an average rounding error of 0.25 ticks when converting from double to ticks, and no systematic bias. Here's a demo of that systematic bias caused by truncation.
The old runtime prints the correct answer; the new one does not. The correct answer is zero: the body of the loop first multiplies the timespan by a constant, then divides by the same constant. |
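The original demo code did not survive formatting here; below is a hypothetical reconstruction of the kind of loop described (the constant 1.37 and the iteration count are my own choices, not the commenter's):

```csharp
using System;

class DriftDemo
{
    static void Main()
    {
        const double factor = 1.37;   // arbitrary non-trivial constant
        TimeSpan ts = TimeSpan.FromSeconds(60);
        long startTicks = ts.Ticks;

        for (int i = 0; i < 1_000_000; i++)
        {
            // Multiply then divide by the same constant, round-tripping through
            // double seconds each time. With truncation, each conversion can only
            // lose ticks, so the error accumulates in one direction instead of
            // averaging out as it would with rounding.
            double scaled = ts.TotalSeconds * factor;
            ts = TimeSpan.FromSeconds(scaled / factor);
        }

        Console.WriteLine($"Drift after 1,000,000 iterations: {ts.Ticks - startTicks} ticks");
    }
}
```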
It was strictly better for some inputs and worse for others. We should not "fix" this by putting back in still-broken behavior. We should fix this properly and ensure that it works for both large and small values; I described how to do that in my last comment. |
This is notably not correct. A simplistic example is Math.PI * double.Epsilon. |
Do you have a test case when it’s worse?
The formula is more complex than just FP64 math because
I don't believe the example is relevant because Math.PI * double.Epsilon is about 1.48E-323. If the value is in seconds, not only is it well below 1 TimeSpan tick, it's even well below Planck time, i.e. it doesn't make any sense. |
We have had and have fixed multiple precision bugs in the last few releases, especially where inputs were being unnecessarily truncated and the full tick precision wasn't being respected.
Again, this was a simplistic example showing that you cannot and should not rely on plain FP64 math and rounding being exact in all cases. The correct fix here is to update the algorithm to correctly break apart the input into its integral and fractional portions. |
There is some decently complex math that goes into this, but the premise is that the integral portion of a double is always exactly representable. Likewise, the primitive operations (including multiplication) are correctly rounded under IEEE 754. The integer and fractional portions can always be split into their own "exact" values. This leaves a fractional portion which, provided the scale amount is small enough (well under 2^53), can be scaled with at most a single rounding step. |
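A tiny snippet (mine, for illustration) checking that premise on the value from this issue: the truncate-and-subtract split is exact, so nothing is lost before the one rounding step on the fractional part:

```csharp
using System;

double value = 78043.43;
double whole = Math.Truncate(value);   // 78043, an integer, exactly representable
double frac = value - whole;           // exact: value and whole are within a factor of two (Sterbenz's lemma)

Console.WriteLine(whole + frac == value);           // True: the split loses nothing
Console.WriteLine((long)whole * 10_000_000);        // 780430000000, scaled exactly
Console.WriteLine(Math.Round(frac * 10_000_000));   // 4300000, the only rounding step
```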
Can you recall or find any of these? Without knowing what specifically was fixed, it's not really possible to come up with a good solution. But from a general perspective, a simple rounding should be enough. FP64 precision is enough to represent exact ticks up to 28.5 years or so. And these |
I'm not sure that's actually correct. Ideally, look for a well-tested off-the-shelf library which implements either FP128 (like Boost) or arbitrary precision (like GMP, but beware of the license). However, I don't think it's worth the complexity for time spans, at least not until I see a realistic use case which demonstrates the errors of the simple FP64 + rounding approach. These libraries are very complicated and not particularly fast; I only use them when I absolutely need to. |
The perf here won't be hurt and isn't going to involve anything complex. In fact, several parts will likely become faster or be mitigated by other handling we already have/need.
You're looking at about 15 instructions (in the worst case) with no more than 2 (generally predicted) branches to get the relevant parts and ensure this is a correctly computed result without introducing rounding error. -- On modern hardware, you can actually extract these parts in 2 instructions since there is dedicated hardware support for this.
There are a number of bugs related to these double-based conversions. Many of these are related to anything more precise than a millisecond. The most correct thing, and one which will not cause any kind of significant slowdown but which will ensure correct computation, is to handle this in a way that minimizes rounding error and best preserves the input values given by the user. This would involve splitting the input into its integral and fractional portions, as described above. |
The more I use .NET 5 the more it looks like there is a problem with the System.Double type. .NET 5
.NET FW 4.7.2
|
That is, the underlying double values are the same; what changed is the default string formatting. Since you cannot construct an exact binary representation of most decimal values, what you see depends on how many digits the formatter emits. Using a round-trippable format (such as "G17") shows the stored values are identical. |
.NET Framework has "many" bugs around floating-point parsing and formatting that resulted in precision being lost and incorrect rounding. These are bugs that cannot be fixed due to the very high compat bar that .NET Framework has.
.NET Core, since 3.0, has fixed these bugs and documented the bug fix (breaking change). There was a blog post that went out at the time describing the change, the why, and some of the problematic scenarios that could be hit: https://devblogs.microsoft.com/dotnet/floating-point-parsing-and-formatting-improvements-in-net-core-3-0/
The simplest example is that .NET Framework will not correctly round-trip every double: parsing the default-formatted string can give back a different value than you started with. .NET Core 3.0 and later ensures that the value roundtrips by default and so there is no loss in precision in the displayed/formatted string. The actual computation producing the double is the same on both runtimes; only the formatting differs. |
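An illustrative example of that formatting difference (my own, not taken from the blog post):

```csharp
using System;

double value = 0.1 + 0.2;   // the stored double is 0.30000000000000004...

// .NET Framework: ToString() emits at most 15 significant digits, so this prints "0.3",
// which does not round-trip back to the same double.
// .NET Core 3.0+ / .NET 5: ToString() emits the shortest round-trippable string,
// so this prints "0.30000000000000004".
Console.WriteLine(value);

// "G17" is round-trippable on both runtimes.
Console.WriteLine(value.ToString("G17"));
```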
@tannergooding Thanks for the detailed reply and the link, it is appreciated. We are evaluating the impact on our software given the link you have provided. |
Following dotnet/coreclr#24279, which improves the precision of TimeSpan.FromMilliseconds, I'm having compatibility issues with .NET Framework and, more importantly, weird inconsistencies in test code:
.NET Framework 4.7.2
outputs:
.NET Core 3.0 preview 7:
outputs:
I don't really mind the fact that there will be a difference in behavior between netfx and .NET Core (loss in precision vs not), but the main issue for me is that calling TimeSpan.FromSeconds(CONSTANT) and TimeSpan.FromMilliseconds(CONSTANT * 1000) on 3.0-preview7 does not return the same value (different Ticks). This caused a lot of failing unit tests similar to Assert.That(...., Is.EqualTo(TimeSpan.FromSeconds(123.45))) that pass on netfx but not on .NET Core due to the weird behavior.
Looking at the implementation: TimeSpan.FromSeconds(78043.43) calls TimeSpan.Interval(78043.43, 10000000), which computes double millis = 78043.43 * 10000000, giving 780434299999.99988, but that is then truncated to a long, producing 780434299999 ticks, which is off by one. TimeSpan.FromMilliseconds(78043430) calls TimeSpan.Interval(78043430, 10000), so millis is now 780434300000, which does not change when truncated to a long.
Remark: maybe the variable should now be renamed to 'ticks' instead of 'millis' here?
Maybe the +0.5/-0.5 rounding should still be applied, but on the ticks before truncating?
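A short repro of the discrepancy described above, using the values quoted in this issue (the output comments reflect the .NET Core 3.0 behavior reported here; on .NET Framework both calls agree):

```csharp
using System;

class FromSecondsVsFromMilliseconds
{
    static void Main()
    {
        TimeSpan fromSeconds = TimeSpan.FromSeconds(78043.43);
        TimeSpan fromMilliseconds = TimeSpan.FromMilliseconds(78043430);

        // As reported above on .NET Core 3.0 preview 7:
        Console.WriteLine(fromSeconds.Ticks);                  // 780434299999
        Console.WriteLine(fromMilliseconds.Ticks);             // 780434300000
        Console.WriteLine(fromSeconds == fromMilliseconds);    // False
    }
}
```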