Kubo stopped listing pins after some time #10596
Comments
I can confirm we are seeing the same behaviour while testing. Logs from ipfs-cluster:
No errors or other logs observed in Kubo. Also, calling the API endpoint or running
Hello, per the details you provide, I just think leveldb exploded in some way. How many files are there in the leveldb datastore directory? Do you have monitoring for disk reads? Is it trying to read/write a lot from disk even when nothing is being added?

I would recommend switching leveldb to pebble (so, flatfs + pebble). You will lose the pinset (not the blocks, just the list of things you have pinned), but cluster will add the pins again, in time.
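A minimal sketch of how to check that, assuming the default repo layout where leveldb lives under `$IPFS_PATH/datastore` (`~/.ipfs/datastore` by default):

```sh
# Count the files in the leveldb datastore and check its size.
IPFS_PATH="${IPFS_PATH:-$HOME/.ipfs}"
find "$IPFS_PATH/datastore" -type f | wc -l
du -sh "$IPFS_PATH/datastore"

# Rough view of disk reads while nothing is being added (needs sysstat).
iostat -x 5 3
```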
Yes, when switching you will need to edit the datastore configuration. Do you use MFS for anything?

Also, regarding the ipfs-pins graph, it goes to 0 because of ipfs-cluster/ipfs-cluster#2122. From now on it will stay at the last reported amount when the request fails.

Even if it won't need to download data, it will need to add 16M items to the datastore, and pinning will make it traverse everything it has.
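Before switching, it may help to record what the repo currently declares; a small sketch, assuming a standard flatfs + leveldb setup (`jq` is only for readability):

```sh
# The datastore layout Kubo is configured with.
ipfs config show | jq .Datastore.Spec

# The repo keeps a copy of the spec it was created with; this file must
# stay consistent with the config after any datastore change.
cat "${IPFS_PATH:-$HOME/.ipfs}/datastore_spec"
```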
Thank you, I'm going to try that today on one of our nodes.
No, we don't use MFS; we only add new pins via the cluster API, and when needed we access our data via Kubo's gateway using the CIDs. As far as I understand this doesn't involve the MFS subsystem.
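For context, retrieval on our side is plain gateway requests; a sketch with a placeholder CID, assuming the default gateway address:

```sh
# Fetch content through the local Kubo gateway (default 127.0.0.1:8080).
# <CID> is a placeholder for one of the pinned CIDs.
curl -sS "http://127.0.0.1:8080/ipfs/<CID>" -o output.bin
```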
Good to know, thank you 👍
I have switched one of our nodes to the pebble datastore; right now it is slowly adding the whole pinset back to pebble.
hi
@ehsan6sha still too soon to tell. Our node is still adding the data back into the pebble store. It has only caught up on 50% of the previous pins right now.
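To follow the catch-up I'm simply counting what Kubo reports; a minimal sketch (with millions of pins this takes a while to stream):

```sh
# Number of recursive pins Kubo currently knows about.
ipfs pin ls --type=recursive --quiet | wc -l
```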
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
@Mayeu how does it look now? (assuming it finished re-pinning)
Checklist
Installation method
built from source
Version
Config
Description
Hello,
We started to experience an issue with Kubo in our 2-node cluster where Kubo doesn't list pins anymore.
We have 2 nodes that each pin the entire pinset we keep track of, which is around 16.39 million pins right now.
In the last weeks (while we were still using 0.29), Kubo stopped responding to the `/pin/ls` queries sent by the cluster; those requests were hanging "indefinitely" (as in, when using curl I stopped the command after ~16h without a response). Our `ipfs-cluster` process logs the following when this happens:

This started out of the blue; there was no change on the server. The issue remained after upgrading to 0.32.1.
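For reference, the hang can be reproduced directly against the RPC API with plain curl; a minimal sketch, assuming the default API address `127.0.0.1:5001`:

```sh
# On a healthy node this returns the pin list; on a broken node it hangs
# with no output, which matches what ipfs-cluster sees.
curl -X POST "http://127.0.0.1:5001/api/v0/pin/ls?type=recursive"
```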
At that time we had the bloom filter activated; deactivating it did improve the situation for a while (maybe 24h), and then the issue started to show up again. In retrospect, I think it may not be related to the bloom filter at all.
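Deactivating the bloom filter here means setting `Datastore.BloomFilterSize` to 0 and restarting the daemon; a sketch of the config change (the re-enable size is only an example value):

```sh
# Disable the blockstore bloom filter (0 = off); restart Kubo afterwards.
ipfs config --json Datastore.BloomFilterSize 0

# Re-enable it with a size in bytes (example value).
ipfs config --json Datastore.BloomFilterSize 1048576
```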
These are the typical metrics reported by `ipfs-cluster`, which show when Kubo stops responding to `/pin/ls`:

The graph on top is the number of pins the cluster is keeping track of, and the one on the bottom is the number of pins reported by Kubo. When restarting Kubo it generally jumps back to the expected amount, and after a while it drops to 0. At that point any attempt to list pins from Kubo fails.
We only have the metrics reported by ipfs-cluster because of this Kubo bug.
The server's CPU, RAM, and disk utilization is fairly low when this issue shows up, so it doesn't look like a performance issue. The only metric that goes out of bounds is the number of open file descriptors, which grows until it reaches the 128k limit we had set. I bumped it to 1.28 million, but it still reaches it (with or without the bloom filter):
The FD limit is set both at the systemd unit level and via `IPFS_FD_MAX`.

Restarting Kubo makes it work again most of the time, but sometimes it doesn't change anything and it instantly starts to fail again.
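A sketch of how we track and raise the limit, assuming the daemon runs under a systemd unit named `ipfs.service` (the unit name and the 1.28M value are specific to our setup):

```sh
# Current number of open file descriptors held by the running daemon.
ls /proc/"$(pidof ipfs)"/fd | wc -l

# Raise the limit via a systemd drop-in; IPFS_FD_MAX also tells Kubo to
# bump its own soft limit on startup.
sudo mkdir -p /etc/systemd/system/ipfs.service.d
sudo tee /etc/systemd/system/ipfs.service.d/fd-limit.conf <<'EOF'
[Service]
LimitNOFILE=1280000
Environment=IPFS_FD_MAX=1280000
EOF
sudo systemctl daemon-reload
sudo systemctl restart ipfs.service
```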
Here is some profiling data from one of our nodes:
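In case it helps others reproduce, profiles like these can be gathered with Kubo's built-in collector, plus a goroutine dump from the pprof endpoint exposed on the RPC API address; a sketch assuming the default `127.0.0.1:5001`:

```sh
# Bundle CPU/heap/goroutine profiles into a zip for sharing.
ipfs diag profile

# Full goroutine dump, useful to see where pin listing is stuck while it hangs.
curl -s "http://127.0.0.1:5001/debug/pprof/goroutine?debug=2" -o goroutines.txt
```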
More info about the system:

- `logs` and `cache` for ZFS

Kubo also emits a lot of:
But `ipfs swarm resources` doesn't return anything above 5-15%, so I think this error is actually on the remote node side and not related to our issue, right?

Anything else we could gather to help solve this issue?
Right now I'm out of ideas for getting our cluster back into a working state (besides restarting Kubo every 2h, but that's not a solution since it prevents us from reproviding the pins to the rest of the network).
Edit with additional info:

- The daemon runs with the `--enable-gc` flag, as prescribed by the ipfs-cluster docs.