Make internal & private classes sealed where possible. Make private methods static where possible #689
Conversation
I'll try to take a more detailed look at this eventually, but I am currently a little short on time, so please expect some waiting time. Like I mentioned over in #687, even with changes to private members that don't affect the public API surface, I am not sure whether they are worth spending much time on right now.
But yeah, I'll review as soon as I find the time.
No stress @stakx :-) It would be great to have a benchmark in BenchmarkDotNet to measure the impact, to see whether it is worthwhile.

And on newer .NET, the CLR might have become so good at devirtualization & PGO that the difference is not measurable.
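For what it's worth, a minimal BenchmarkDotNet sketch along these lines might look as follows. This is just an illustration, not part of the PR; the class and method names are invented, and the numbers on modern .NET may well show no measurable difference, as noted above:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class Unsealed { public virtual int Get() => 42; }
public sealed class SealedDerived : Unsealed { public override int Get() => 42; }

public class DevirtualizationBenchmark
{
    private readonly Unsealed _viaBase = new Unsealed();
    private readonly SealedDerived _viaSealed = new SealedDerived();

    [Benchmark(Baseline = true)]
    public int CallThroughUnsealed() => _viaBase.Get();

    // The receiver's static type is sealed here, so the JIT is free to
    // devirtualize (and potentially inline) the call.
    [Benchmark]
    public int CallThroughSealed() => _viaSealed.Get();
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<DevirtualizationBenchmark>();
}
```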
I've done similar refactorings in the past for Moq, which is built on top of DynamicProxy, and I occasionally ran benchmarks using both BenchmarkDotNet and PerfView. When testing isolated methods that you've refactored/optimized against their unoptimized versions, BenchmarkDotNet might tell you it's become X% faster. However, as soon as you benchmark code that includes anything involving System.Reflection.Emit, you no longer see any actual improvement, because much, much more time is spent there than in your own user code... any micro-optimizations like these simply drown in the noise.

All that is to say: when doing these refactorings, we should do them with the goal of improving code correctness or legibility, not for any performance gains (because, I'm sorry to say, those almost certainly won't manifest in a practically meaningful quantity).
From what I've read, the CLR is indeed very good at devirtualization, even with interface dispatch. There are places inside DynamicProxy where private fields are interface-typed but hold instances of a known class type, say:

`private IDictionary<...> foo = new Dictionary<...>();`

...and it seems a bit silly to force the CLR to go through interface dispatch, or to optimize it away, when we could've just used the concrete static type in the first place:

`private Dictionary<...> foo = new();`

In those cases I think it can make sense to refactor, but mostly because there's no real gain in abstracting/hiding the concrete type. If such a refactoring brings perf gains along with it, that's a nice bonus, but by themselves they would probably be too minuscule to actually matter, so they shouldn't be the driving factor for the refactoring.

All of the above is simply my opinion based on past experiences with a different but related library... I'll happily change my mind in case I'm wrong about anything.
Yeah, it could be nice to do just some of the "correctness" refactorings, but it is hard to verify that there are no regressions, due to black-magic runtime stuff... At least on my hardware, different BenchmarkDotNet runs (on other projects) produce different results for the same code... To measure these "correctness" refactorings, you would need a non-throttling machine with minimal background tasks.

Slightly off topic: disable Turbo/Boost for a constant clock without throttling. A good use for older computers? Their instruction sets, cache sizes & RAM speeds + sizes might even be more representative of the computers out there?
BTW, by runtime black magic I mean that the generated IL should be more optimized, but it is hard to know what the resulting machine code looks like after PGO & other optimizations, and how these "correctness" optimizations affect the black magic.
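One way to actually see the machine code the JIT produces (rather than guessing) is BenchmarkDotNet's disassembly diagnoser. A sketch, with a made-up benchmark body; the diagnoser itself is a real BenchmarkDotNet feature:

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Diagnosers;

// [DisassemblyDiagnoser] asks BenchmarkDotNet to dump the JIT-compiled
// assembly for each benchmark (following calls up to maxDepth levels),
// so you can check whether a call was devirtualized or inlined after
// tiered compilation and PGO have kicked in.
[DisassemblyDiagnoser(maxDepth: 2)]
public class DisasmExample
{
    private readonly Dictionary<int, int> _map = new() { [1] = 2 };

    [Benchmark]
    public int Lookup() => _map[1];
}
```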
This implements the `sealed`/`static` part of #687, except for `DictionaryAdapter`. By doing this, virtual method table lookups before calls are avoided.
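As a sketch of the kind of change this PR makes (the class and method names below are invented for illustration, not actual DynamicProxy code):

```csharp
// Before: an unsealed internal class, with a private instance method
// that never actually touches `this`.
internal class SomeInternalHelper
{
    private string MakeUnique(string name) => name + "_1";
}

// After: the class is sealed, so calls through a reference of this type
// can be devirtualized by the JIT without a vtable lookup; the private
// helper is static, which removes the hidden `this` parameter and lets
// the compiler verify it depends on no instance state.
internal sealed class SomeInternalHelperSealed
{
    private static string MakeUnique(string name) => name + "_1";
}
```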