maestral pauses sync when a dropbox server error is encountered #463

Closed
the-wes opened this issue Oct 5, 2021 · 13 comments · Fixed by #474
Labels: bug (Something isn't working)

Comments


the-wes commented Oct 5, 2021

Describe the bug
While a sync is in progress, this stack trace is generated:

...
2021-10-04 17:04:09 sync INFO: Indexing 957324...
...
2021-10-04 17:05:58 sync INFO: Syncing ↓ 461/500
2021-10-04 17:06:18 manager ERROR: Dropbox server error
Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 126, in convert_api_errors
    yield
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 1155, in list_remote_changes_iterator
    result = self.dbx.files_list_folder_continue(result.cursor)
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/base.py", line 1976, in files_list_folder_continue
    r = self.request(
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 323, in request
    res = self.request_json_string_with_retry(host,
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 479, in request_json_string_with_retry
    return self.request_json_string(host,
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 602, in request_json_string
    raise InternalServerError(request_id, r.status_code, r.text)
dropbox.exceptions.InternalServerError: InternalServerError('593a9ccaf0c94894a0d057c39ae9d615', 500, '')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/manager.py", line 791, in _handle_sync_thread_errors
    yield
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/manager.py", line 769, in startup_worker
    self.sync.download_sync_cycle(client)
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/sync.py", line 2829, in download_sync_cycle
    for changes, cursor in changes_iter:
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/sync.py", line 2888, in list_remote_changes_iterator
    for changes in changes_iter:
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 1156, in list_remote_changes_iterator
    yield result
  File "/home/anaconda3/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 128, in convert_api_errors
    raise dropbox_to_maestral_error(exc, dbx_path, local_path)
maestral.errors.DropboxServerError: Dropbox server error. Something went wrong on Dropbox's end. Please check on status.dropbox.com if their services are up and running and try again later.
2021-10-04 17:06:18 manager INFO: Shutting down threads...
2021-10-04 17:06:18 sync INFO: Sync aborted
2021-10-04 17:06:18 manager INFO: Paused
2021-10-05 04:58:31 manager INFO: Connecting...
2021-10-05 04:58:41 manager INFO: Connected

To Reproduce
I don't know for sure, but I suspect the root cause of this is the absurdly large number of files I'm trying to sync. Dropbox's native sync daemon handles this level of volume well; it just doesn't run on CentOS 7 like I need. So I don't think this is necessarily a limitation in Dropbox's API.

Expected behaviour
Maestral should wait a random amount of time and attempt to continue the sync from where it left off.
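
For illustration only, here is a minimal sketch of what "wait, then continue from the same cursor" could look like against the public Dropbox SDK. The helper function is hypothetical and is not Maestral's actual code; only `files_list_folder_continue` and `InternalServerError` are taken from the traceback above.

```python
import random
import time

import dropbox
from dropbox.exceptions import InternalServerError


def list_changes_with_retry(dbx: dropbox.Dropbox, cursor: str, max_retries: int = 5):
    """Yield folder-listing results, retrying the *same* cursor after a random
    wait when Dropbox returns an internal server error (sketch, not Maestral code)."""
    retries = 0
    while True:
        try:
            result = dbx.files_list_folder_continue(cursor)
        except InternalServerError:
            if retries >= max_retries:
                raise  # give up and surface the error, as Maestral does today
            retries += 1
            time.sleep(random.uniform(5, 60))  # wait a random amount of time
            continue  # reuse the same cursor, i.e. resume where we left off
        retries = 0
        yield result
        if not result.has_more:
            return
        cursor = result.cursor
```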

System:

  • Maestral version: 1.5.0
  • Python version: 3.8.3
  • OS: CentOS 7
  • Desktop environment: none, running headless
  • PyQt version (for Linux GUI): none

Additional context
none

the-wes added the bug label on Oct 5, 2021

the-wes commented Oct 5, 2021

As you can see, it aborted yesterday, and when I noticed it today, I restarted it. It ran for about an hour before aborting again with the same output. I should probably grab that log output and compare it to see whether it dies at exactly the same spot, which could indicate a problem with a particular file it's trying to sync.


the-wes commented Oct 6, 2021

I noticed that #289 is possibly a similar issue.

samschott (Owner) commented:

Hm, internal Dropbox server errors are handled differently depending on the API call which raised them.

If such an internal error is raised while syncing a specific file, other sync jobs will continue to work and syncing that specific file will be retried later. If the error occurs when trying to fetch a list of remote files / changes in the first place, the sync is currently just aborted. The underlying reason for this is that the latter server errors have typically occurred only during scheduled maintenance or when https://status.dropbox.com indicated actual server issues. Retrying immediately does not make sense in such cases, and the issue typically takes a few hours to be resolved.

However, especially in your case, I can see that retry logic would be useful here. For now, restarting the sync manually will indeed resume from where it was interrupted.

Just out of interest, how many items are you trying to sync? There was a previous issue #133 which reported arbitrary request timeouts when attempting to index > 600k files. This was fixed by a similar retry logic.


the-wes commented Oct 6, 2021

I did see #133; however, I'm not having some of the other issues mentioned there: status and activity don't hang, stop and start return OK and are not killed, etc. And now that I look again, I do indeed see that the log claims "resuming index" after the restart, so hopefully it will finish eventually once I restart it enough times. Here is the latest log:

...
2021-10-05 11:59:36 sync INFO: Syncing ↓ 462/500
2021-10-05 11:59:38 sync INFO: Indexing 965664...
2021-10-05 11:59:39 sync INFO: Syncing ↓ 1/500
2021-10-05 11:59:41 sync INFO: Syncing ↓ 2/500
2021-10-05 11:59:45 sync INFO: Syncing ↓ 12/500
2021-10-05 11:59:47 sync INFO: Syncing ↓ 25/500
2021-10-05 11:59:49 sync INFO: Syncing ↓ 43/500
2021-10-05 11:59:51 sync INFO: Syncing ↓ 51/500
2021-10-05 11:59:55 sync INFO: Syncing ↓ 55/500
2021-10-05 11:59:58 sync INFO: Syncing ↓ 91/500
2021-10-05 12:00:00 sync INFO: Syncing ↓ 94/500
2021-10-05 12:00:04 sync INFO: Syncing ↓ 102/500
2021-10-05 12:00:07 sync INFO: Syncing ↓ 137/500
2021-10-05 12:00:09 sync INFO: Syncing ↓ 142/500
2021-10-05 12:00:13 sync INFO: Syncing ↓ 145/500
2021-10-05 12:00:20 sync INFO: Syncing ↓ 181/500
2021-10-05 12:00:22 sync INFO: Syncing ↓ 216/500
2021-10-05 12:00:27 sync INFO: Syncing ↓ 217/500
2021-10-05 12:00:30 sync INFO: Syncing ↓ 224/500
2021-10-05 12:00:34 sync INFO: Syncing ↓ 260/500
2021-10-05 12:00:38 sync INFO: Syncing ↓ 264/500
2021-10-05 12:00:40 sync INFO: Syncing ↓ 292/500
2021-10-05 12:00:44 sync INFO: Syncing ↓ 301/500
2021-10-05 12:00:47 sync INFO: Syncing ↓ 308/500
2021-10-05 12:00:49 sync INFO: Syncing ↓ 343/500
2021-10-05 12:00:54 sync INFO: Syncing ↓ 345/500
2021-10-05 12:00:58 sync INFO: Syncing ↓ 378/500
2021-10-05 12:01:00 sync INFO: Syncing ↓ 379/500
2021-10-05 12:01:03 sync INFO: Syncing ↓ 388/500
2021-10-05 12:01:06 sync INFO: Syncing ↓ 426/500
2021-10-05 12:01:09 sync INFO: Syncing ↓ 439/500
2021-10-05 12:01:13 sync INFO: Syncing ↓ 452/500
2021-10-05 12:01:15 sync INFO: Syncing ↓ 476/500
2021-10-05 12:01:17 sync INFO: Syncing ↓ 486/500
2021-10-05 12:01:43 manager ERROR: Dropbox server error
Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 126, in convert_api_errors
    yield
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 1155, in list_remote_changes_iterator
    result = self.dbx.files_list_folder_continue(result.cursor)
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/base.py", line 1976, in files_list_folder_continue
    r = self.request(
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 323, in request
    res = self.request_json_string_with_retry(host,
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 479, in request_json_string_with_retry
    return self.request_json_string(host,
  File "/home/anaconda3/lib/python3.8/site-packages/dropbox/dropbox_client.py", line 602, in request_json_string
    raise InternalServerError(request_id, r.status_code, r.text)
dropbox.exceptions.InternalServerError: InternalServerError('8a0af87ca88f4455b00765369c86885d', 500, '')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/manager.py", line 791, in _handle_sync_thread_errors
    yield
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/manager.py", line 769, in startup_worker
    self.sync.download_sync_cycle(client)
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/sync.py", line 2829, in download_sync_cycle
    for changes, cursor in changes_iter:
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/sync.py", line 2888, in list_remote_changes_iterator
    for changes in changes_iter:
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 1156, in list_remote_changes_iterator
    yield result
  File "/home/anaconda3/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/anaconda3/lib/python3.8/site-packages/maestral/client.py", line 128, in convert_api_errors
    raise dropbox_to_maestral_error(exc, dbx_path, local_path)
maestral.errors.DropboxServerError: Dropbox server error. Something went wrong on Dropbox's end. Please check on status.dropbox.com if their services are up and running and try again later.
2021-10-05 12:01:43 manager INFO: Shutting down threads...
2021-10-05 12:01:43 sync INFO: Sync aborted
2021-10-05 12:01:43 manager INFO: Paused
2021-10-06 00:53:13 manager INFO: Connecting...
2021-10-06 00:53:23 manager INFO: Connected

I did notice that you reacted to his >600k files as a lot. I think mine might be over 900k, so that's why I described it as "absurdly high." Do you have any info on any sort of rate limits in Dropbox's API? As I said, their native client handles this volume fine, but I don't know if they use the same protocol for sync that yours does. Since theirs is proprietary and closed source, there isn't a great way for me to tell.


samschott commented Oct 6, 2021

I did see #133 however I'm not having some of the other issues mentioned there...

Yeah, those problems were fixed in response to #133 :)

I don't know if they use the same protocol for sync that yours does

I am quite certain that they use a private API instead of the public one. Notably, they can do things such as sync only the chunks of a file that changed instead of uploading / downloading the entire file, create symlinks on the Dropbox server, etc. I have submitted feature requests for Dropbox to support this in their public API as well. However, they do not openly disclose their roadmap.

do you have any info on any sort of rate limits in dropbox's API?

There is rate limiting applied to requests; however, Dropbox does not disclose the actual limits, as those may change over time and without warning. Dropbox Business accounts have limits for file transfer API calls which the admin can see.

In any case, such rate limiting is handled properly by Maestral or the Dropbox SDK by backing off for the appropriate amount of time and should not manifest itself as an InternalServerError.
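
(Editorial note: for reference, a minimal sketch of honouring the SDK's suggested back-off on rate limiting might look like the following. The helper is hypothetical, and whether the exception always carries a usable `backoff` value is an assumption.)

```python
import time

from dropbox.exceptions import RateLimitError


def call_with_backoff(fn, *args, **kwargs):
    """Retry a Dropbox SDK call when it is rate limited, sleeping for the
    back-off period suggested by the server (sketch only, not Maestral code)."""
    while True:
        try:
            return fn(*args, **kwargs)
        except RateLimitError as exc:
            # `backoff` is the server-suggested wait in seconds; assume it may
            # be missing and fall back to a small default.
            time.sleep(getattr(exc, "backoff", None) or 5.0)
```

For example, `call_with_backoff(dbx.files_list_folder_continue, cursor)` would keep retrying a listing call that hits the rate limit.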

I do indeed see that the log claims "resuming index" after the restart so hopefully it will finish eventually, once I restart it enough times

Yes, it does resume where you left off, though the indexing counter might appear to start from 0 again (I'll need to check whether I fixed that). For now, your best bet really is to manually restart syncing until it has finally run through. Apologies for the inconvenience.

samschott (Owner) commented:

@rschinkoeth, I don't think that what you are seeing is related to this issue. Downgrading to v1.4.8 likely resulted in an inconsistent sync state - you are now syncing only the personal folder again but Maestral's database structure still reflects the Team Space folder. This is dangerous and can potentially result in files being inadvertently deleted. Could you check the log file for more detailed error messages and open a new issue to track this?


rschinkoeth commented Oct 9, 2021

Oh... I started the sync fresh, with a new token, and deleted local storage including the Maestral dotfile. Is the DB stored elsewhere? I'll find out where myself; it just hadn't occurred to me that it could be elsewhere :/

If that's the case, my issue would basically be resolved, because I can live with 1.4.8.

If not, I'll post a new issue tomorrow.

Thanks!

samschott (Owner) commented:

i started the sync fresh, with new token, deleted local storage including the maestral dotfile

This will reset the DB automatically, so all should be well!


the-wes commented Oct 11, 2021

After restarting the initial sync a few more times, it did eventually complete and now appears to be keeping both sides up to date reliably. I suppose at this point this bug report should become a feature request. Should I open a new one and close this one for that?

The request being to have Maestral retry a few times before giving up when Dropbox returns an internal server error. Even in the case of a maintenance window or known outage, every time Dropbox has such an event, all Maestral users must connect to their instances and restart Maestral. I feel like it could keep trying, using standard random back-off logic (wait a few seconds first, and increase the wait time at each interval up to, say, 20 minutes?).
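
(Editorial sketch of the kind of capped, randomized back-off schedule being proposed here; the names are hypothetical and this is not necessarily what the eventual fix in #474 implements.)

```python
import random
import time


def backoff_delays(base: float = 5.0, cap: float = 20 * 60):
    """Yield ever-longer randomized delays (full jitter), capped at ~20 minutes."""
    delay = base
    while True:
        yield random.uniform(0, delay)
        delay = min(delay * 2, cap)


def retry_while_transient(fn, is_transient):
    """Keep calling `fn`, sleeping between attempts for as long as the raised
    error is considered transient (e.g. a Dropbox server error)."""
    for delay in backoff_delays():
        try:
            return fn()
        except Exception as exc:
            if not is_transient(exc):
                raise
            time.sleep(delay)
```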


rschinkoeth commented Oct 12, 2021

Since I was desperate, I came up with a very dirty fix script. It displays some stats and restarts Maestral if it breaks.
And sorry for the name beforehand; it's not meant to be rude, I was just annoyed, as this issue has been driving me nucking futs for the last few days.

https://github.com/rschinkoeth/maestralfuckup

Feel free to use it. Out of the box it requires:
zsh
nload
tmux
maestral

cd to the maestralfuckup folder and run ./mfuck.sh (it can't be run from anywhere else because of laziness).
Not sure whether all stop cases are covered, but for me it's working as a dirty fix.
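
(Editorial note: a rough Python equivalent of such a watchdog is sketched below. It assumes the `maestral status`, `maestral stop` and `maestral start` CLI commands, and it assumes the status output contains the word "Paused" after an aborted sync; adjust the check to whatever your version actually prints.)

```python
import subprocess
import time


def watchdog(interval: int = 60) -> None:
    """Restart Maestral whenever its status reports that syncing has paused."""
    while True:
        status = subprocess.run(
            ["maestral", "status"], capture_output=True, text=True
        ).stdout
        if "Paused" in status:  # assumption: this string appears after an abort
            subprocess.run(["maestral", "stop"])
            subprocess.run(["maestral", "start"])
        time.sleep(interval)


if __name__ == "__main__":
    watchdog()
```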

samschott (Owner) commented:

I suppose at this point this bug report should become a feature request. should I open a new one and close this one for that?

Leaving this as a bug report is fine. I'll discuss the issue with Dropbox support as well, but some retry logic on our side makes sense to me.

samschott (Owner) commented:

sorry for the name beforehand, it's not meant to be rude, i was just annoyed as this issue has been driving me nucking futs for the last few days.

I'm not sure if I approve of the name, but it's your script and therefore your choice 😉


rschinkoeth commented Oct 14, 2021

It's more an insult towards Dropbox official than in your direction. I'm so fed up with them limiting the client in every damn way possible, gimping the public API, and so on. The thing pissing me off the most is the filesystem limitations: for me, virtiofs with ext4. Maestral: zero issues. Dropbox complains because it doesn't see the underlying ext4. Or no sync to NFS storage, which is ZFS. It's just beyond dumb. If I didn't have free unlimited storage with 180-day rollback, I'd have long moved on from this utter... clusterfuck of a service. And tbh, even though I do have this for free, I'm thinking about moving to something you can rsync to and have a LOT more control over, etc. Sadly the available options are pretty expensive if you need 5+ TB of storage with high bandwidth...

Sorry for derailing and venting. This week this service has been driving me NUTS. And now I will shut up.
