add a symfony benchmark with custom normalizers #7
base: master
Conversation
Does it make sense to add some kind of note on the benchmark results for some serializers?
Does this benchmark go recursively through the object graph? (I'm not a Symfony Serializer expert...)
probably, or group the benchmarks into ones that use reflection, ones with custom normalizers, etc...
yes, i checked. I think some assertions on the normalized data should be added, to check like-for-like results. Then issues like #4 would have been caught.
Not sure about this. IMO it does not matter how a serializer gets the data; what matters is the final result. Does it make sense to somehow verify the result of the serialization process? Like checking the resulting JSON?
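A minimal sketch of what such a check could look like, run once outside the timed loop so it does not affect the measurement. The Person class, its fields, and the expected JSON are illustrative assumptions, not code from this repository; only the Symfony Serializer calls are real API.

```php
<?php
// Illustrative only: Person, its fields and the expected JSON are assumptions.
use Symfony\Component\Serializer\Encoder\JsonEncoder;
use Symfony\Component\Serializer\Normalizer\ObjectNormalizer;
use Symfony\Component\Serializer\Serializer;

final class Person
{
    public $name = 'Jane';
    public $age = 32;
}

$serializer = new Serializer([new ObjectNormalizer()], [new JsonEncoder()]);

// Run once before the timed iterations, so validation has no impact on timing.
$actual = $serializer->serialize(new Person(), 'json');
$expected = '{"name":"Jane","age":32}';

if ($actual !== $expected) {
    throw new \RuntimeException("Unexpected output: $actual (expected $expected)");
}
```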
Actually, JMS internally uses both reflection and closures, switching automatically between them to get the best performance.
Well, it's a perfectly valid (and recommended) usage of the Symfony Serializer: it's one of its strengths. It comes with a very handy (but slow, even if it's improving dramatically thanks to @fbourigault and @bendavies) normalizer, and you can easily create your own optimized ones. It's very similar to what Serializard allows. Anyway, I'm not sure that it should be in this benchmark. Or we should add a note somewhere in the README explaining that the feature set of JMS, ObjectNormalizer, etc. and that of JsonSerializable, SerializardClosure, and Symfony custom normalizers are definitely not comparable.
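For context, a custom normalizer in this sense looks roughly like the following. This is a sketch under assumptions, not the code added by this PR: the Person class and its getters are made up, while NormalizerInterface is the real Symfony contract (signatures as they were in the Symfony versions of that era).

```php
<?php
// Sketch of a hand-written normalizer: no metadata lookup, no reflection,
// just direct property mapping. Person and its getters are illustrative.
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

final class PersonNormalizer implements NormalizerInterface
{
    public function normalize($object, $format = null, array $context = [])
    {
        return [
            'name' => $object->getName(),
            'age'  => $object->getAge(),
        ];
    }

    public function supportsNormalization($data, $format = null)
    {
        return $data instanceof Person;
    }
}
```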
@dunglas Similar to what I asked a few lines above.
yep, i wrote that above 👍 |
👍 It's hard to compare how fast a benchmark is without looking at which features it exercises. What about defining a common list of features (such as …)? Adding a note when displaying the benchmark results could be valuable too.
I agree with @fbourigault regarding the serializer's capabilities. We need to define the top 3 (or more) features used by the community and compare only those libraries which implement these features. In addition, the benchmark runtime should force the libraries to use these features. For example, each benchmark should implement one custom normalizer, define one virtual property, define one serialization group, and so on. Only with these features in play will we make a fair comparison among the serializers.
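As an illustration of what "forcing" those features on the model could mean, a sketch under assumptions (not code from this PR): with the Symfony Serializer the benchmarked class might carry a serialization group and a computed, "virtual" property exposed through a getter. Wiring the annotation loader into the serializer is required for groups to take effect and is omitted here; Person and its fields are made up.

```php
<?php
// Illustrative model only: Person and its fields are assumptions.
use Symfony\Component\Serializer\Annotation\Groups;

final class Person
{
    /** @Groups({"public"}) */
    private $name;

    private $birthYear;

    public function __construct(string $name, int $birthYear)
    {
        $this->name = $name;
        $this->birthYear = $birthYear;
    }

    /** @Groups({"public"}) */
    public function getName(): string
    {
        return $this->name;
    }

    /**
     * A "virtual" property: derived at serialization time, not stored.
     *
     * @Groups({"public"})
     */
    public function getAge(): int
    {
        return (int) date('Y') - $this->birthYear;
    }
}
```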
Yes, it totally makes sense! I created my own benchmark application a while ago and I can say that we need to validate the output. I was trying to understand why a specific library could be so fast, and some minutes later I realized that the library doesn't work properly with collections of objects and doesn't serialize the entire object graph. Challenge: how can we validate the output without side effects on the measurement? I mean, the validation shouldn't influence the mean time.
I understand the overall sentiment of not comparing solutions with different feature sets, but the definition of a serializer is simple: any tool which traverses the object graph to produce its representation in a different format (and back). I think that we should define the list of capabilities a serializer library can have and then prepare a feature matrix, while still comparing all of them in the same list. These capabilities could be defined with specific test cases where the test runner provides data and verifies the result, with the list of serializers as the data provider array. Examples: serializes objects, serializes arrays, serializes scalars, detects object graph cycles, allows custom normalizers, supports deserialization, allows custom hydrators, allows custom formats, supports JSON, supports XML, supports YAML. Feel free to add yours. BTW @tsantos84, as your benchmarking project contains the most libraries, could you maybe port the missing ones here? Let's use the momentum generated here and prepare the PHP serializer benchmark. :)
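A rough sketch of that idea, assuming PHPUnit and a thin adapter interface per library; the adapter class names (and Person) are hypothetical placeholders, not classes from this repository.

```php
<?php
// Sketch: one test method per capability, serializer adapters via data provider.
// JmsSerializerAdapter / SymfonySerializerAdapter / Person are hypothetical.
use PHPUnit\Framework\TestCase;

final class SerializerCapabilityTest extends TestCase
{
    public function serializerProvider(): array
    {
        return [
            'jms'     => [new JmsSerializerAdapter()],
            'symfony' => [new SymfonySerializerAdapter()],
        ];
    }

    /** @dataProvider serializerProvider */
    public function testSerializesObjects($serializer): void
    {
        $person = new Person('Jane', 32);

        $this->assertSame(
            '{"name":"Jane","age":32}',
            $serializer->serialize($person, 'json')
        );
    }
}
```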
I can port the missing libraries, and I agree with you about capabilities, since we would clearly show users which features each library has. Otherwise people looking for such a tool could choose a library unsuitable for their use case.
BTW, I have a Symfony Serializer with a custom normalizer in my repository. Can I bring it here?
Isn't this PR exactly about a custom Symfony Serializer normalizer?
I'm starting to think that @thunderer has a valid point. It should be up to the user to compare features, while the benchmark should just do the benchmarking.
Yes. Replied to this thread believing I was on an issue instead of the PR. Ignore it. |
I half added this because I was intrigued by how it would perform, after I saw SerializardClosureBenchmark.

I'm not even entirely sure that I agree with this being included (as well as SerializardClosureBenchmark), as it makes the benchmarks non-comparable. You can't compare this new benchmark with the JMS benchmarks, for example; it's just comparing apples with pears - they are not doing the same thing. Unless of course you added a JMS benchmark with custom handlers, like this one.

Edit: faster than JsonSerializableBenchmark - funny.