
Degraded performance in case of salvo [diesel] #201

Open
shujaatak opened this issue Dec 20, 2022 · 4 comments

Comments

@shujaatak

While checking out https://www.techempower.com/benchmarks/#section=data-r21
I found that salvo [diesel] sits at position 115, while xitca-web [diesel] sits at an impressive position 15!


I love salvo because I find it easy to use, so I'm wondering what degrades the performance in the diesel case, and whether it will be fixed any time soon?

@chrislearn
Member

You don't need to pay too much attention to the results of this benchmark; they depend heavily on the test code itself, for example on which third-party libraries the test code uses.

Take the comparison between xitca-web [diesel] and salvo [diesel]: xitca-web [diesel] uses the diesel-async library, while salvo [diesel] does not.
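This difference matters under load: with the synchronous diesel API, every query pins a worker thread for the full database round trip, while diesel-async lets the executor overlap those waits. A std-only toy sketch (no diesel or database involved; the "query" is just a 10 ms sleep standing in for round-trip latency) illustrates the effect:

```rust
use std::thread;
use std::time::{Duration, Instant};

/// Stand-in for a synchronous diesel query: the calling thread is
/// blocked for the whole round trip (simulated here as a 10 ms sleep).
fn blocking_query() {
    thread::sleep(Duration::from_millis(10));
}

/// Runs 8 simulated queries two ways and returns
/// (sequential wall time, overlapped wall time).
fn measure() -> (Duration, Duration) {
    const QUERIES: usize = 8;

    // Blocking model: each query holds the worker until it finishes,
    // so total latency grows linearly with the number of queries.
    let start = Instant::now();
    for _ in 0..QUERIES {
        blocking_query();
    }
    let sequential = start.elapsed();

    // Overlapped model (the effect diesel-async gives an async executor):
    // the waits run concurrently, so wall time stays close to a single
    // query's latency instead of the sum of all of them.
    let start = Instant::now();
    let handles: Vec<_> = (0..QUERIES)
        .map(|_| thread::spawn(blocking_query))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let overlapped = start.elapsed();

    (sequential, overlapped)
}

fn main() {
    let (sequential, overlapped) = measure();
    println!("sequential: {:?}, overlapped: {:?}", sequential, overlapped);
}
```

In a real async server the overlap comes from the executor multiplexing futures rather than spawning OS threads, but the wall-clock effect on a concurrent benchmark is the same: the sequential timing dominates the blocking variant's score.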

@shujaatak
Author

> Take the comparison between xitca-web [diesel] and salvo [diesel], xitca-web [diesel] uses diesel-async library, but salvo [diesel] does not use it.

Nice analysis!

By the way, it would be great if you could update the salvo [diesel] example so that people looking at the TechEmpower benchmarks don't get misled.

@Hans-Wu-cn

I have also noticed this issue. I am wondering if it is possible to test the communication capability and memory stability of various web frameworks under the same configuration (without testing the performance of SQL-related libraries), as well as their serialization performance.

@chrislearn
Member

In fact, the performance differences between web frameworks based on hyper are very small. If a particular test (such as diesel) shows low performance, it is often just a problem with how the test code is written.

Overall, salvo's performance is not low, and in some cases its measured performance is even higher than axum's.

Although TechEmpower's tests are fairly authoritative, some frameworks are not maintained in a timely manner, and the dependencies they use (Rust version, Docker image) are inconsistent, which can also cause performance problems. Typically, hyper scores worse than axum in many tests, which should not be the case, since axum is built on top of hyper.

The performance test is just a reference.
