CouchDB queries are extremely slow after many queries are made #4835
Comments
Could you add debug logging to the peer, reproduce the issue, and provide the log snippet showing exactly where the slowdown is? Please also provide the CouchDB log. For the peer, enable debug logging with the env variable:
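The variable itself wasn't captured in this thread. Assuming it refers to Fabric's standard logging control, raising the CouchDB-related logger to debug would look something like this (the logger name may vary between Fabric versions):

```
FABRIC_LOGGING_SPEC=info:couchdb=debug
```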
It will help to compare the timings in the peer log and the CouchDB log. Can you provide a snippet from both the peer and CouchDB logs for a single request (rather than a screenshot)? It would also help to isolate an especially slow request, since small 30ms delays are often indistinguishable from noise.
This is the complete log of a single request:
This specific request took 5 seconds, whereas earlier requests take around 200ms, roughly 25x faster.
From the CouchDB repo I found this, which might be of interest:
We used a different strategy for now: because the connection through Hyperledger degrades really quickly, for reads we connect directly to CouchDB, and this way CouchDB doesn't suffer anymore.
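For reference, a minimal sketch of what such a direct read could look like in Go, bypassing the peer. It assumes Fabric's default state database naming (`<channel>_<chaincode>`), a local CouchDB with illustrative credentials, and a plain (non-composite) state key; composite keys are stored with control-character separators, so the real document ID depends on the key layout. Reading CouchDB directly also bypasses the peer's validation path, so it only fits read-only reconstruction like the one described here.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// readState fetches one state key directly from CouchDB, bypassing the
// peer. Fabric names the state database "<channel>_<chaincode>"; host,
// credentials, and key below are illustrative assumptions.
func readState(couchURL, channel, chaincode, key string) ([]byte, error) {
	db := fmt.Sprintf("%s_%s", channel, chaincode)
	u := fmt.Sprintf("%s/%s/%s", couchURL, db, url.PathEscape(key))
	resp, err := http.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("couchdb returned %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// All values here are hypothetical placeholders.
	doc, err := readState("http://admin:password@localhost:5984", "mychannel", "tokencc", "token1")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(doc))
}
```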
Thanks for providing the peer log snippet. The CouchDB round trips are taking 30ms to 40ms when it slows down. The reason I requested the CouchDB logs as well is that they provide a timing for each database request from the CouchDB perspective. I wanted to see whether the response times in the CouchDB log match the times in the peer log, or whether the response times in the peer log are significantly longer, indicating that the additional time is spent in the connection and connection management. Can you provide a similar snippet from the CouchDB log?
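On the CouchDB side, one way to raise log verbosity (assuming CouchDB 2.x/3.x with its default ini-based configuration) is to set the log level in local.ini and restart CouchDB; this is a general CouchDB setting, not something specific to this issue:

```
[log]
level = debug
```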
Description
We have a chaincode which mints a large number of tokens.
These tokens contain additional information which we sometimes need to reconstruct.
There might be some 100,000 tokens, each of them having 100 CouchDB keys associated with it.
Our operation consists of reading these 100,000 tokens, and each call iterates over 100 keys, for a grand total of 10,000,000 queries to CouchDB in this case.
For the first hour of queries everything is smooth, at 200-400ms per token.
After that the situation degrades quickly and we are left with one token request per minute, or worse one every two minutes, which abruptly ends in a DEADLINE_EXCEEDED error.
Steps to reproduce
To help reproduce, this is the chaincode function we're calling 100,000 times:
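The function itself wasn't captured in this thread. As a stand-in, here is a minimal sketch of the access pattern the description implies (one GetState per key, 100 keys per token); the contract name, function name, and key layout are hypothetical, not the reporter's actual chaincode:

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// TokenContract is a hypothetical contract; all names are illustrative.
type TokenContract struct {
	contractapi.Contract
}

// ReadToken reconstructs a token by reading its ~100 associated state
// keys one GetState at a time, matching the access pattern described
// in this issue (100,000 tokens x 100 keys = 10,000,000 reads).
func (c *TokenContract) ReadToken(ctx contractapi.TransactionContextInterface, tokenID string) ([][]byte, error) {
	parts := make([][]byte, 0, 100)
	for i := 0; i < 100; i++ {
		// Assumed key layout: one composite state key per token part.
		key, err := ctx.GetStub().CreateCompositeKey("token", []string{tokenID, fmt.Sprintf("%d", i)})
		if err != nil {
			return nil, err
		}
		value, err := ctx.GetStub().GetState(key)
		if err != nil {
			return nil, err
		}
		parts = append(parts, value)
	}
	return parts, nil
}
```

At this volume, a range read such as GetStateByPartialCompositeKey would fetch all 100 keys of a token through a single iterator instead of 100 individual GetState round trips, which may be worth trying independently of the slowdown itself.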