Description
Following dotnet/coreclr#24279, which improves the precision of `TimeSpan.FromMilliseconds`, I'm running into compatibility issues with .NET Framework and, more importantly, odd inconsistencies in test code:
.NET Framework 4.7.2:

```csharp
Console.WriteLine(TimeSpan.FromSeconds(78043.43));
Console.WriteLine(TimeSpan.FromMilliseconds(78043430));
Console.WriteLine(TimeSpan.FromSeconds(78043.43) == TimeSpan.FromMilliseconds(78043430));
```

outputs:

```
21:40:43.4300000
21:40:43.4300000
True
```
.NET Core 3.0 preview 7:

```csharp
Console.WriteLine(TimeSpan.FromSeconds(78043.43));
Console.WriteLine(TimeSpan.FromMilliseconds(78043430));
Console.WriteLine(TimeSpan.FromSeconds(78043.43) == TimeSpan.FromMilliseconds(78043430));
```

outputs:

```
21:40:43.4299999
21:40:43.4300000
False
```
I don't really mind that there is a difference in behavior between .NET Framework and .NET Core (loss of precision vs. none), but the main issue for me is that on 3.0-preview7, `TimeSpan.FromSeconds(CONSTANT)` and `TimeSpan.FromMilliseconds(CONSTANT * 1000)` do not return the same value (different `Ticks`).
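Comparing the underlying `Ticks` directly shows the one-tick discrepancy (the values come from the analysis below):

```csharp
Console.WriteLine(TimeSpan.FromSeconds(78043.43).Ticks);      // 780434299999 on .NET Core 3.0 preview 7
Console.WriteLine(TimeSpan.FromMilliseconds(78043430).Ticks); // 780434300000
```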
This caused a lot of failing unit tests similar to `Assert.That(...., Is.EqualTo(TimeSpan.FromSeconds(123.45)))` that pass on .NET Framework but fail on .NET Core because of this behavior.
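To illustrate the pattern, here is a minimal NUnit sketch (a hypothetical test using the constant from the repro above, not one of our actual tests) that passes on .NET Framework but fails on .NET Core 3.0 preview 7:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class TimeSpanPrecisionTests
{
    [Test]
    public void FromSecondsMatchesFromMilliseconds()
    {
        // The code under test produces its TimeSpan from milliseconds...
        TimeSpan actual = TimeSpan.FromMilliseconds(78043430);

        // ...while the expected value is written as seconds.
        // On .NET Core 3.0 preview 7 the two differ by one tick, so this fails.
        Assert.That(actual, Is.EqualTo(TimeSpan.FromSeconds(78043.43)));
    }
}
```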
Looking at the implementation:

`TimeSpan.FromSeconds(78043.43)` calls `TimeSpan.Interval(78043.43, 10000000)`, which computes `double millis = 78043.43 * 10000000`. This gives `780434299999.99988`, which is then truncated to a long, producing `780434299999` ticks, off by one.

`TimeSpan.FromMilliseconds(78043430)` calls `TimeSpan.Interval(78043430, 10000)`, so `millis` is now `780434300000`, which does not change when truncated to a long.
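The truncation is easy to reproduce in isolation (a standalone sketch of the arithmetic described above, not the actual `Interval` source):

```csharp
double fromSeconds = 78043.43 * 10000000; // 780434299999.99988 (78043.43 is not exactly representable as a double)
double fromMillis = 78043430d * 10000;    // 780434300000 exactly

Console.WriteLine((long)fromSeconds);     // 780434299999, truncation drops the .99988
Console.WriteLine((long)fromMillis);      // 780434300000
```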
Remark: maybe the variable should now be renamed to `ticks` instead of `millis` here?
Maybe the +0.5/-0.5 rounding should still be applied, but to the ticks before truncating?
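Something along these lines, perhaps (a sketch of the suggestion, not a proposed patch; `IntervalRounded` is a hypothetical name, and `value`/`scale` mirror the `Interval` parameters above):

```csharp
// Hypothetical sketch (not the actual coreclr code): apply the old
// +0.5/-0.5 rounding at the tick level before truncating to long.
static TimeSpan IntervalRounded(double value, double scale)
{
    double ticks = value * scale;       // FromSeconds(78043.43): 780434299999.99988
    ticks += (ticks >= 0) ? 0.5 : -0.5; // nudge toward the nearest tick
    return new TimeSpan((long)ticks);   // truncation now yields 780434300000
}
```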