Add a quick benchmark against psycopg2 & asyncpg #10

Merged

merged 2 commits from benchmark into master on Dec 13, 2019
Conversation

@k4nar (Contributor) commented Dec 6, 2019

This is just indicative for now, but maybe it could be improved to give more meaningful results.

At least now we know that we clearly have an issue when iterating over results :) .

@k4nar requested a review from entwanne on December 6, 2019, 17:09
```python
class AsyncpgColumnsBench(AsyncpgMixin, ColumnsBench):

    def run(self, query):
        self.loop.run_until_complete(self.conn.fetch(query))
```

Review comment on the snippet above:

Do you think the fact you're running run_until_complete every time may have an impact?

I guess it's about comparing the libs in equal conditions (sync calls) vs. in their central use case.

I'd guess both kinds of benchmark make sense. A simple Python script that runs the queries received on stdin n times each and writes the output to stdout, written in each lib's idiomatic way and measured with your shell's time, would make sense too; see the sketch below.
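A minimal sketch of what such a standalone script could look like for the psycopg2 variant, assuming a local `bench` database, SELECT-only queries on stdin, and an arbitrary repetition count; the DSN, `N`, and the file name are illustrative, not part of this PR:

```python
#!/usr/bin/env python3
"""Run every query read from stdin N times and echo the rows to stdout,
so the whole process can be measured with the shell's `time`."""
import sys

import psycopg2

N = 100  # assumed repetition count

def main():
    conn = psycopg2.connect("dbname=bench")  # assumed DSN
    # Read all queries up front so stdin I/O is not part of the query loop.
    queries = [line.strip() for line in sys.stdin if line.strip()]
    with conn, conn.cursor() as cur:
        for _ in range(N):
            for query in queries:
                cur.execute(query)
                sys.stdout.write(repr(cur.fetchall()) + "\n")

if __name__ == "__main__":
    main()
```

Measured from the shell, e.g. `time python bench_psycopg2.py < queries.sql > /dev/null`.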

@k4nar (Contributor, Author) replied:

I wanted this to be as idiomatic as possible for now. I don't think run_until_complete really has an overhead compared to a simple await, but I'll check.
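A driver-free micro-benchmark along these lines could settle that question: it times n no-op awaits driven by one run_until_complete call each, versus a single run_until_complete over a coroutine that awaits n times (the repeat count here is arbitrary):

```python
import asyncio
import time

async def noop():
    pass

def per_call(n):
    """One run_until_complete per coroutine, as in the benchmark classes."""
    loop = asyncio.new_event_loop()
    start = time.perf_counter()
    for _ in range(n):
        loop.run_until_complete(noop())
    elapsed = time.perf_counter() - start
    loop.close()
    return elapsed

def single_run(n):
    """A single run_until_complete driving n plain awaits."""
    async def runner():
        for _ in range(n):
            await noop()
    loop = asyncio.new_event_loop()
    start = time.perf_counter()
    loop.run_until_complete(runner())
    elapsed = time.perf_counter() - start
    loop.close()
    return elapsed

if __name__ == "__main__":
    n = 10_000
    print(f"run_until_complete per call: {per_call(n):.4f}s")
    print(f"single run, plain awaits:    {single_run(n):.4f}s")
```

If the two numbers are close, the per-call run_until_complete in the bench is harmless; a large gap would mean the loop start/stop is being measured along with the driver.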

f"Mean: {mean:.4f} - "
f"Median: {median:.4f} - "
f"Max: {max_:.4f} - "
f"Min: {min_:.4f}"

Review comment on the snippet above:

Is there a reason you limit the output to 4 decimal places? You end up with a single significant digit, which isn't very useful for comparing.

If you think the precision of perf_counter() can't be trusted below this value, it might be worth running each query 10, 100, or 1000 times per sample, and then repeating that n times for the statistics as you already do (unless you think there might be caching on the PG side).

@k4nar (Contributor, Author) replied:

This was just to make it easier to read; I want to look at trends, not precise numbers. I guess I could use perf_counter_ns and use nanoseconds as the base unit instead; a sketch of that follows.
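A minimal sketch of that nanosecond-based variant, where `run` stands in for the bench's per-driver run() method and the repeat count is an assumption:

```python
import statistics
import time

def time_query_ns(run, query, repeat=100):
    """Time run(query) `repeat` times with time.perf_counter_ns and
    report integer-nanosecond statistics."""
    samples = []
    for _ in range(repeat):
        start = time.perf_counter_ns()
        run(query)
        samples.append(time.perf_counter_ns() - start)
    return {
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "max": max(samples),
        "min": min(samples),
    }
```

Keeping the raw integer samples also makes it easy to print trends at whatever precision turns out to be readable.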

```
SELECT generate_series(1, 10000)
Mean: 0.0050 - Median: 0.0047 - Max: 0.0086 - Min: 0.0039
---
```

Review comment on the output above:

I find this hard to read. The information we want most (how slonik compares to the other methods) is the most hidden.

What about graphing the results? A sketch follows.
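A minimal matplotlib sketch of what a grouped-bar comparison could look like; every number, and the second query label, is a made-up placeholder rather than data from this PR:

```python
import matplotlib.pyplot as plt

# Placeholder medians in seconds, one value per query, per driver.
results = {
    "psycopg2": [0.004, 0.012],
    "asyncpg": [0.003, 0.010],
    "slonik": [0.005, 0.030],
}
queries = ["generate_series(1, 10000)", "other query (placeholder)"]

fig, ax = plt.subplots()
width = 0.25
for i, (driver, medians) in enumerate(results.items()):
    positions = [q + i * width for q in range(len(queries))]
    ax.bar(positions, medians, width, label=driver)
# Center each tick under the middle bar of its group.
ax.set_xticks([q + width for q in range(len(queries))])
ax.set_xticklabels(queries, rotation=15)
ax.set_ylabel("median time (s)")
ax.legend()
plt.tight_layout()
plt.show()
```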

@k4nar (Contributor, Author) replied:

Oh yes, I would like to store all the results and present a comprehensive summary with the variations. This was just easier to write for a first draft 😄 .

To be honest, we already have terrible results in this bench, so we already know where to improve. I think we'll improve the bench once we're closer to the other drivers, and add more real-life scenarios.

@k4nar (Contributor, Author) commented Dec 13, 2019

Let's merge this as a basis. We'll improve this bench later if we need to.

@k4nar merged commit 651d6e4 into master on Dec 13, 2019
@k4nar deleted the benchmark branch on December 13, 2019, 11:57
@ewjoachim commented Dec 13, 2019

(LGTM too!)
