Replies: 1 comment 1 reply
-
The subject you raise is at least adjacent to some topics we've discussed in the past. Those past discussions raised some interesting ideas, but changes have not yet materialized. We have considered, for example, asking framework maintainers to "endorse" certain implementations. An endorsed implementation could be represented by another badge, similar to the "T" icon on frameworks that are included in the hardware performance calculation. Endorsement is effectively a framework maintainer conveying that the implementation is consistent with how they expect the framework to be used in production environments. Similarly, we considered adding more types of "Implementation Approach." You can imagine, for example, a range of values such as the following, in roughly descending order of perceived value: "Endorsed/Canonical," "Production-Ready," "Experimental," and "Stripped." Relatedly, past discussions included an intent to allow maintainers to document and describe the approaches they've taken, and to have that made visible on the results website, along with links to the frameworks, and so on. Basically, give contributors a way to communicate about their implementation approach and weigh in on how readers can interpret the results.
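To make the tiered idea concrete, here is a minimal Python sketch of what such metadata could look like. This is purely hypothetical: the tier names come from the comment above, but the enum, its ordering, and the badge helper are illustrative assumptions, not anything that exists in the project today.

```python
from enum import IntEnum

class ImplementationApproach(IntEnum):
    """Hypothetical tiers, in roughly descending order of perceived value.

    Higher numeric value = higher perceived value, so tiers compare naturally.
    """
    ENDORSED_CANONICAL = 4  # maintainer-endorsed as canonical
    PRODUCTION_READY = 3
    EXPERIMENTAL = 2
    STRIPPED = 1

def badge_for(approach: ImplementationApproach) -> str:
    # Hypothetical badge rule: only endorsed implementations get an extra
    # marker, analogous to the existing "T" icon. "E" is an invented glyph.
    return "E" if approach is ImplementationApproach.ENDORSED_CANONICAL else ""
```

The point of using an ordered enum is that results tooling could then filter or sort rows by tier (e.g. "show me Production-Ready and above") rather than treating the approach as an opaque string.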
-
Hi 👋 ,
I was looking through a handful of implementations and wanted to ask about the spirit of the "Implementation Approach" column in the data-grid.
The description is thus:
The description is interesting because it is ever so slightly different from a mental model that I have, which paints a dichotomy between:
Perhaps I am conflating the "Classification" field with the "Implementation Approach" field, and that is the entirety of my confusion, but I want to elaborate on my thoughts below to clarify:
I'm going to pick on aspnet here because I like the framework and enjoy using it, but it is not the only implementation that, in my view, is marked as realistic when it isn't:
As we can see, all of the aspnet core fortune tests are marked as realistic.
However, drilling down a little bit, we can see that the implementation for the base fortunes benchmark writes to a buffered writer (essentially stringbuilding) that is also defined within the project itself.
This could very well be within the guidelines of "Realistic Approach", but to me seems a bit disingenuous.
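To illustrate what "writing to a buffered writer (essentially stringbuilding)" amounts to, here is a minimal Python sketch (not the actual C# code from the repository) of a hand-rolled fortunes renderer. The function name and HTML shape are illustrative assumptions; the point is that the markup is assembled by string concatenation rather than by a template engine, which is the kind of approach a typical production application would use.

```python
import html

def render_fortunes_by_hand(fortunes):
    # Hand-rolled HTML via string building -- no template engine involved.
    # `fortunes` is assumed to be an iterable of (id, message) pairs.
    parts = [
        "<!DOCTYPE html><html><head><title>Fortunes</title></head>"
        "<body><table><tr><th>id</th><th>message</th></tr>"
    ]
    for fid, message in fortunes:
        # Escape the message so untrusted text cannot inject markup.
        parts.append(f"<tr><td>{fid}</td><td>{html.escape(message)}</td></tr>")
    parts.append("</table></body></html>")
    return "".join(parts)
```

This is fast precisely because it skips the view/template layer, which is why labeling it "Realistic" can feel at odds with how most teams would actually ship a page in that framework.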
The network effects are non-zero, and in some cases the data is used in official material.
I do understand it's difficult to fairly represent the spectrum of frameworks (a "realistic" approach for a barebones framework may be the same code as the C# above), but I was curious whether there has been any thought toward a clearer delineation.
Thanks for expending the time/$ to run these, it's always interesting to see the shape of the data and how it evolves! 🙇