maestral pauses sync when a dropbox server error is encountered #463
Comments
As you can see, it aborted yesterday, and when I noticed it today, I restarted it. It ran for about an hour before aborting again with the same output. I should probably grab that log output and compare to see if it dies at exactly the same spot, which could indicate a problem with a particular file it's trying to sync.
I noticed #289 is possibly a similar issue.
Hm, internal Dropbox server errors are handled differently depending on the API call which raised them. If such an internal error is raised while syncing a specific file, other sync jobs will continue to work and syncing that specific file will be retried later. If the error occurs when trying to fetch the list of remote files / changes in the first place, the sync is currently just aborted. The underlying reason for this is that the latter server errors have typically occurred only during scheduled maintenance or when https://status.dropbox.com indicated actual server issues. Retrying immediately does not make sense in such cases, and the issue typically takes a few hours to be resolved. However, especially in your case, I can see that retry logic would be useful here. For now, restarting the sync manually will indeed resume from where it was interrupted. Just out of interest, how many items are you trying to sync? There was a previous issue #133 which reported arbitrary request timeouts when attempting to index > 600k files. This was fixed by similar retry logic.
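Roughly, the distinction between the two error-handling paths described above could be sketched like this. This is a hedged illustration, not Maestral's actual code: the two callables are placeholders, and only the use of `maestral.errors.SyncError` / `DropboxServerError` comes from the thread.

```python
import maestral.errors as errors


def download_cycle(list_remote_changes, apply_remote_change):
    """Hedged sketch of the behaviour described above; not Maestral's code.

    `list_remote_changes` and `apply_remote_change` are illustrative callables.
    """
    # An error while fetching the list of remote changes is *not* caught here,
    # so a DropboxServerError raised by the listing call aborts the whole cycle.
    changes = list_remote_changes()

    retry_later = []
    for item in changes:
        try:
            apply_remote_change(item)
        except errors.SyncError:
            # A per-item failure only affects this item; it is queued and
            # retried on a later sync cycle while everything else continues.
            retry_later.append(item)
    return retry_later
```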
I did see #133; however, I'm not having some of the other issues mentioned there... status and activity don't hang, stop and start return OK and not killed, etc. And now that I look again, I do indeed see that the log claims "resuming index" after the restart, so hopefully it will finish eventually, once I restart it enough times. Here is the latest log: ... The above exception was the direct cause of the following exception: Traceback (most recent call last): ...

I did notice that you reacted to his >600k files as a lot. I think mine might be over 900k, which is why I described it as "absurdly high." Do you have any info on any sort of rate limits in Dropbox's API? As I said, their native client handles this volume fine, but I don't know if they use the same protocol for sync that yours does. Since theirs is proprietary and closed source, there isn't a great way for me to tell.
Yeah, those problems were fixed in response to #133 :)
I am quite certain that they use a private API instead of the public one. Notably, they can do things such as sync only the chunks of a file which changed instead of uploading / downloading the entire file, create symlinks on the Dropbox server, etc. I have submitted feature requests for Dropbox to support this in their public API as well. They however do not openly disclose their roadmap.
There is rate limiting applied to requests; however, Dropbox does not disclose the actual limits as those may change over time and without warning. Dropbox Business accounts have limits for file transfer API calls which the admin can see. In any case, such rate limiting is handled properly by Maestral or the Dropbox SDK by backing off for the appropriate amount of time, and should not manifest itself as a server error like the one in your log.
Yes, it does resume where you left off, though the indexing counter might appear to start from 0 again (I'll need to check if I fixed that). For now, your best bet really is to manually restart syncing until it has finally run through. Apologies for the inconvenience.
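For context on the rate limiting mentioned above, here is a minimal sketch of how the public Dropbox SDK surfaces it and how a caller can back off. The access token and path are placeholders, and the 5-second fallback wait is an arbitrary choice; only `RateLimitError` and its `backoff` attribute are taken from the SDK.

```python
import time

import dropbox
from dropbox.exceptions import RateLimitError


def list_folder_with_backoff(dbx: dropbox.Dropbox, path: str):
    """List a folder, sleeping for the server-suggested backoff on rate limits.

    Minimal sketch; real code would also cap the number of retries.
    """
    while True:
        try:
            return dbx.files_list_folder(path)
        except RateLimitError as exc:
            # The server may suggest how long to wait; fall back to 5 seconds.
            time.sleep(exc.backoff or 5)


dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token
result = list_folder_with_backoff(dbx, "")  # "" is the Dropbox root folder
```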
@rschinkoeth, I don't think that what you are seeing is related to this issue. Downgrading to v1.4.8 likely resulted in an inconsistent sync state - you are now syncing only the personal folder again but Maestral's database structure still reflects the Team Space folder. This is dangerous and can potentially result in files being inadvertently deleted. Could you check the log file for more detailed error messages and open a new issue to track this?
Oh... I started the sync fresh, with a new token, and deleted local storage including the Maestral dotfile. Is the DB stored elsewhere? I'll find out myself where; it just hadn't occurred to me it could be elsewhere :/ If that's the case, my issue would basically be resolved, because I can live with 1.4.8. If not, I'll post a new issue tomorrow. Thanks!
This will reset the DB automatically, so all should be well!
After restarting the initial sync a few more times, it did eventually complete and now appears to be keeping both sides up to date reliably. I suppose at this point this bug report should become a feature request. Should I open a new one and close this one for that? The request being to have Maestral retry a few times before giving up when Dropbox returns an internal server error. Even in the case of a maintenance window or known outage, every time Dropbox has such an event, all Maestral users must connect to their instances and restart Maestral. I feel like it could keep trying, using standard random back-off logic (wait a few seconds first, and increase the wait time at each interval up to, say, 20 minutes?).
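The proposed retry logic could look roughly like the following sketch: randomised exponential backoff, capped at 20 minutes, around a Dropbox API call. The helper name and parameters are illustrative and not part of Maestral's API; only `InternalServerError` comes from the Dropbox SDK as seen in the traceback below.

```python
import random
import time

from dropbox.exceptions import InternalServerError


def call_with_backoff(func, *args, max_wait=20 * 60, max_retries=10, **kwargs):
    """Retry a Dropbox API call on internal server errors with jittered backoff.

    Illustrative sketch only: waits roughly 2, 4, 8, ... seconds (randomised),
    capped at max_wait (20 minutes here), then gives up and re-raises.
    """
    for attempt in range(max_retries):
        try:
            return func(*args, **kwargs)
        except InternalServerError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error as before
            wait = min(max_wait, (2 ** (attempt + 1)) * random.uniform(0.5, 1.5))
            time.sleep(wait)


# e.g. result = call_with_backoff(dbx.files_list_folder_continue, cursor)
```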
Since I was desperate, I came up with a very dirty fix script. It displays some stats & restarts Maestral if it breaks: https://github.com/rschinkoeth/maestralfuckup. Feel free to use it. Out of the box it requires: cd to the maestralfuckup folder, run ./mfuck.sh (can't be run from anywhere else because of laziness).
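For anyone who prefers not to run the script above, a rough Python equivalent of such a watchdog might look like this. It assumes the maestral CLI is on the PATH and that the output of `maestral status` contains "Paused" or "not running" when sync has stopped; that wording, and the five-minute polling interval, are assumptions rather than documented behaviour.

```python
import subprocess
import time


def maestral_needs_restart() -> bool:
    """Heuristic check; assumes 'Paused' or 'not running' appears in the
    output of `maestral status` when sync has stopped (wording is an
    assumption, not documented CLI behaviour)."""
    result = subprocess.run(["maestral", "status"], capture_output=True, text=True)
    out = result.stdout.lower()
    return "paused" in out or "not running" in out


while True:
    if maestral_needs_restart():
        subprocess.run(["maestral", "start"])  # assumes this resumes syncing
    time.sleep(300)  # check every five minutes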
Leaving this as a bug report is fine. I'll discuss the issue with Dropbox support as well but some retry logic on our side makes sense to me. |
I'm not sure if I approve of the name, but it's your script and therefore your choice 😉 |
It's more an insult towards Dropbox official than in your direction. I'm so fed up with them about limiting the client in every damn way possible, gimping the public API and so on. The thing pissing me off the most is the filesystem limitations. For me that's virtiofs with ext4: Maestral has zero issues, while Dropbox complains because it doesn't see the underlying ext4. Or no sync to NFS storage; that's ZFS. It's just beyond dumb. If I didn't have free unlimited storage with 180-day rollback, I'd have long moved on from this utter... clusterfuck of a service. And tbh, even though I do have this for free, I'm thinking about moving to something you can rsync to, have a LOT more control over, etc. Sadly the available options are pretty expensive if you need like 5+ TB of storage with high bandwidth... Sorry for derailing and venting. This week this service has been driving me NUTS. And now I will shut up.
Describe the bug
While a sync is in progress, this stack trace is generated:
...
2021-10-04 17:04:09 sync INFO: Indexing 957324...
...
2021-10-04 17:05:58 sync INFO: Syncing ↓ 461/500
2021-10-04 17:06:18 manager ERROR: Dropbox server error
Traceback (most recent call last):
File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 126, in convert_api_errors
yield
File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 1155, in list_remote_changes_iterator
result = self.dbx.files_list_folder_continue(result.cursor)
File "/home/anaconda3/lib/python3.8/site-packages/dropbox/base.py", line 1976, in files_list_folder_continue
r = self.request(
File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 323, in request
res = self.request_json_string_with_retry(host,
File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 479, in request_json_string_with_retry
return self.request_json_string(host,
File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 602, in request_json_string
raise InternalServerError(request_id, r.status_code, r.text)
dropbox.exceptions.InternalServerError: InternalServerError('593a9ccaf0c94894a0d057c39ae9d615', 500, '')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/anaconda3/lib/python3.8/site-packages/maestral/manager.py", line 791, in _handle_sync_thread_errors
yield
File "/home/anaconda3/lib/python3.8/site-packages/maestral/manager.py", line 769, in startup_worker
self.sync.download_sync_cycle(client)
File "/home/anaconda3/lib/python3.8/site-packages/maestral/sync.py", line 2829, in download_sync_cycle
for changes, cursor in changes_iter:
File "/home/anaconda3/lib/python3.8/site-packages/maestral/sync.py", line 2888, in list_remote_changes_iterator
for changes in changes_iter:
File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 1156, in list_remote_changes_iterator
yield result
File "/home/anaconda3/lib/python3.8/contextlib.py", line 131, in exit
self.gen.throw(type, value, traceback)
File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 128, in convert_api_errors
raise dropbox_to_maestral_error(exc, dbx_path, local_path)
maestral.errors.DropboxServerError: Dropbox server error. Something went wrong on Dropbox's end. Please check on status.dropbox.com if their services are up and running and try again later.
2021-10-04 17:06:18 manager INFO: Shutting down threads...
2021-10-04 17:06:18 sync INFO: Sync aborted
2021-10-04 17:06:18 manager INFO: Paused
2021-10-05 04:58:31 manager INFO: Connecting...
2021-10-05 04:58:41 manager INFO: Connected
To Reproduce
I don't know for sure, but I suspect the root cause of this is the absurdly large number of files I'm trying to sync. Dropbox's native sync daemon handles this volume well; it just doesn't run on CentOS 7 like I need it to. So I don't think this is necessarily a limitation of Dropbox's API.
Expected behaviour
Maestral should wait a random amount of time and attempt to continue the sync from where it left off.
System:
Additional context
none