
add a symfony benchmark with custom normalizers #7

Open
wants to merge 1 commit into master

Conversation

bendavies

@bendavies bendavies commented Nov 15, 2018

I've half added this because I was intrigued as to how it would perform after I saw SerializardClosureBenchmark.

I'm not even entirely sure that I agree with this being included (as well as SerializardClosureBenchmark), as it makes the benchmarks non-comparable. You can't compare this new benchmark with the JMS benchmarks, for example, as it's just comparing apples with pears - they are not doing the same thing. Unless of course you added a JMS benchmark with custom handlers, like this one.

Edit: faster than JsonSerializableBenchmark - funny.
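
For context, "custom normalizer" here means a hand-written class that maps the object to an array itself instead of letting the reflection-based ObjectNormalizer do it. A minimal sketch of the idea (the Person class and its getters are illustrative, not necessarily what this PR uses):

```php
<?php

use Symfony\Component\Serializer\Encoder\JsonEncoder;
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;
use Symfony\Component\Serializer\Serializer;

// Illustrative fixture class; the benchmark's real models may differ.
final class PersonNormalizer implements NormalizerInterface
{
    public function normalize($person, $format = null, array $context = [])
    {
        // Hand-written mapping: no reflection, no metadata lookup.
        return [
            'name' => $person->getName(),
            'age'  => $person->getAge(),
        ];
    }

    public function supportsNormalization($data, $format = null)
    {
        return $data instanceof Person;
    }
}

$serializer = new Serializer([new PersonNormalizer()], [new JsonEncoder()]);
$json = $serializer->serialize($person, 'json');
```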

@goetas
Member

goetas commented Nov 15, 2018

Does it make sense to add some kind of note on the benchmark results for some serializers?

@goetas
Member

goetas commented Nov 15, 2018

Does this benchmark go recursively through the object graph? (I'm not a symfony serializer expert...)

@bendavies
Author

Does it make sense to add some kind of note on the benchmark results for some serializers?

probably, or group the benchmarks into ones that use reflection, ones with custom normalizers etc...
could you add one for jms?
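
For reference, a JMS benchmark "with custom handlers" would register a hand-written subscribing handler instead of relying on metadata and reflection. A rough sketch (the Person class is illustrative, and the exact handler signature depends on the jms/serializer version):

```php
<?php

use JMS\Serializer\Context;
use JMS\Serializer\GraphNavigator;
use JMS\Serializer\Handler\SubscribingHandlerInterface;
use JMS\Serializer\JsonSerializationVisitor;

final class PersonHandler implements SubscribingHandlerInterface
{
    public static function getSubscribingMethods()
    {
        return [[
            'direction' => GraphNavigator::DIRECTION_SERIALIZATION,
            'format'    => 'json',
            'type'      => Person::class,
            'method'    => 'serializePersonToJson',
        ]];
    }

    public function serializePersonToJson(JsonSerializationVisitor $visitor, Person $person, array $type, Context $context)
    {
        // Hand-written mapping, bypassing the metadata-driven object traversal.
        return [
            'name' => $person->getName(),
            'age'  => $person->getAge(),
        ];
    }
}
```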

@bendavies
Author

Does this benchmark go recursively through the object graph? (I'm not a symfony serializer expert...)

yes, I checked. I think some assertions on the normalized data should be added, to check like-for-like results. Then issues like #4 would have been caught.

@goetas
Member

goetas commented Nov 15, 2018

probably, or group the benchmarks into ones that use reflection, ones with custom normalizers etc...
could you add one for jms?

Not sure about this. IMO it does not matter how a serializer gets the data; what matters is the final result.

Does it make sense to verify somehow the result of the serialization process? Like checking the resulting JSON?

@goetas
Member

goetas commented Nov 15, 2018

Actually, JMS internally uses both reflection and closures, switching automatically between them to get the best performance.

@dunglas
Member

dunglas commented Nov 15, 2018

Well, it's a perfectly valid (and recommended) usage of the Symfony Serializer: it's one of its strengths, it comes with a very handy (but slow, even if it's improving dramatically thanks to @fbourigault and @bendavies) normalizer, and you can easily create your custom, optimized ones.

It's very similar to what Serializard allows. Anyway, I'm not sure that it should be in this benchmark. Or we should add a note somewhere in the README explaining that the feature set of JMS, ObjectNormalizer, etc. and that of JsonSerializable, SerializardClosure, and Symfony custom normalizers are definitely not comparable.

@goetas
Member

goetas commented Nov 15, 2018

@dunglas
I agree with you. We should probably add a note somewhere in the results on the type of serializers.

Similar to what I asked a few lines above:

Does it make sense to verify somehow the result of the serialization process? Like checking the resulting JSON?

@bendavies
Author

Does it make sense to verify somehow the result of the serialization process? Like checking the resulting JSON?

yep, I wrote that above 👍

@fbourigault

Does it make sense to verify somehow the result of the serialization process? Like checking the resulting JSON?

👍

It's hard to compare how fast a benchmark is without looking at which features it exercises. What about defining a common list of features (such as custom-normalizer, groups, etc.) and, for each benchmark, returning the list of enabled features?

Adding a note when displaying the benchmark results could be valuable too.
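
One possible shape for that (nothing like this exists in the repository yet; the interface and feature names below are made up for illustration):

```php
<?php

// Hypothetical interface every benchmark could implement so the runner
// can print a feature list next to the timing results.
interface FeatureAwareBenchmark
{
    /** @return string[] e.g. ['custom-normalizer', 'groups', 'recursion'] */
    public function enabledFeatures(): array;
}

final class SymfonyCustomNormalizerBench implements FeatureAwareBenchmark
{
    public function enabledFeatures(): array
    {
        return ['custom-normalizer', 'recursion'];
    }
}
```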

@tsantos84

I agree with @fbourigault regarding the serializers' capabilities. We need to define the top 3 (or more) features used by the community and compare only those libraries which implement them. In addition, the benchmark runtime should force the libraries to use these features. For example, each benchmark should implement one custom normalizer, define one virtual property, define one serialization group and so on. Only with these features in play will we make a fair comparison among the serializers.

@tsantos84

Does it make sense to verify somehow the result of the serialization process? Like checking the resulting JSON?

Yes, it totally makes sense! I created my own benchmark application a while ago and I can say that we need to validate the output. I was trying to understand why a specific library could be so fast, and a few minutes later I realized that the library doesn't work properly with collections of objects and doesn't serialize the entire object graph.

Challenge: how can we validate the output without side effects on the measurement? I mean, the validation shouldn't influence the mean time.
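
One way around that could be to validate once, outside the measured path, e.g. in a setup hook (such as PHPBench's @BeforeMethods), so the timed method stays a bare serialize() call. A sketch, with illustrative class names and fixture path, reusing the PersonNormalizer idea from above:

```php
<?php

use Symfony\Component\Serializer\Encoder\JsonEncoder;
use Symfony\Component\Serializer\Serializer;

final class SymfonyCustomNormalizerBench
{
    private $serializer;
    private $person;

    public function init(): void
    {
        // Build the serializer and fixture object once, outside the timed revs.
        $this->person     = new Person('Jane', 30);
        $this->serializer = new Serializer([new PersonNormalizer()], [new JsonEncoder()]);
    }

    // Run once via a non-measured hook (e.g. @BeforeMethods({"init", "verifyOutput"})),
    // so the comparison never shows up in the mean time.
    public function verifyOutput(): void
    {
        $expected = file_get_contents(__DIR__ . '/fixtures/person.json');
        $actual   = $this->serializer->serialize($this->person, 'json');

        if ($expected !== $actual) {
            throw new \RuntimeException('Unexpected JSON produced: ' . $actual);
        }
    }

    public function benchSerialize(): void
    {
        // Only this method is measured.
        $this->serializer->serialize($this->person, 'json');
    }
}
```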

@thunderer

thunderer commented Nov 19, 2018

I understand the overall sentiment of not comparing solutions with different feature sets, but the definition of a serializer is simple: any tool which traverses the object graph to produce its representation in a different format (and back). I think we should define the list of capabilities a serializer library can have and then prepare a feature matrix, while still comparing all of them in the same list. These capabilities could be defined with specific test cases where the test runner provides the data and verifies the result, with the list of serializers as the data provider array.

Examples: serializes objects, serializes arrays, serializes scalars, detects object graph cycle, allows custom normalizers, supports deserialization, allows custom hydrators, allows custom formats, supports JSON, supports XML, supports YAML. Feel free to add yours.
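
As a sketch of the "capability as a test case" idea (the adapter and fixture classes here are hypothetical, just to show the data-provider shape):

```php
<?php

use PHPUnit\Framework\TestCase;

final class SerializesObjectGraphTest extends TestCase
{
    /** Each serializer under test, wrapped behind a common (hypothetical) adapter. */
    public function serializers(): array
    {
        return [
            'jms'            => [new JmsAdapter()],
            'symfony'        => [new SymfonyObjectNormalizerAdapter()],
            'symfony-custom' => [new SymfonyCustomNormalizerAdapter()],
        ];
    }

    /** @dataProvider serializers */
    public function testSerializesNestedObjects($serializer): void
    {
        $person = new Person('Jane', 30, new Address('Main St'));

        // A serializer earns the "serializes object graphs" capability
        // only if it produces the expected nested JSON.
        $this->assertJsonStringEqualsJsonString(
            '{"name":"Jane","age":30,"address":{"street":"Main St"}}',
            $serializer->serialize($person)
        );
    }
}
```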

BTW @tsantos84 as your benchmarking project contains the most libraries, could you maybe port the missing ones here? Let's use the momentum generated here and prepare the PHP serializer benchmark. :)

@tsantos84

I can port the missing libraries, and I agree with you about capabilities, as long as we clearly show users which features each library has. Otherwise, people looking for such a tool might choose a library that isn't suitable for their use case.

@tsantos84

BTW, I have in my repository a Symfony Serializer benchmark with a custom normalizer. Can I bring it here?

@goetas
Member

goetas commented Nov 19, 2018

BTW, I have in my repository a Symfony Serializer benchmark with a custom normalizer. Can I bring it here?

Isn't this PR exactly about a custom Symfony Serializer normalizer?

@goetas
Member

goetas commented Nov 19, 2018

I'm starting to think that @thunderer has a valid point. It should be up to the user to compare features, while the benchmark should just do the benchmarking.
Any criteria we pick as the "top 3 features" for a serializer might not be supported by some candidates, even though they are perfectly valid serializers for some use cases.

@tsantos84

tsantos84 commented Nov 19, 2018

BTW, I have in my repository a Symfony Serializer benchmark with a custom normalizer. Can I bring it here?

Isn't this PR exactly about a custom Symfony Serializer normalizer?

Yes. I replied to this thread believing I was on an issue instead of the PR. Ignore it.
