Uncaught exception "Connection refused" #28
Comments
Hello Laurent,

Thanks for creating an issue. For handling connection errors, we're using _get_possible_broker_errors_tuple to try and piggyback on Celery's own error detection code, so I'm surprised that Celery's list of broker errors doesn't cover this case. This is definitely something I'd like to have fixed, so contributions (or any insight into the problem) would be very much appreciated. The goal of delay_or_eager and delay_or_fail is to handle exactly this kind of broker outage gracefully.

Thanks
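For readers following along, here is a rough sketch of the piggybacking pattern described above; it is not jobtastic's actual implementation, and it assumes a task instance that exposes the `_get_possible_broker_errors_tuple()` helper mentioned in the thread:

```python
def delay_or_eager(task, *args, **kwargs):
    # `task` is assumed to be a task instance that exposes
    # _get_possible_broker_errors_tuple(), returning the exception classes
    # Celery itself treats as broker/connection errors.
    broker_errors = task._get_possible_broker_errors_tuple()
    try:
        # Normal path: hand the work to the broker.
        return task.apply_async(args=args, kwargs=kwargs)
    except broker_errors:
        # Broker unreachable: run the task in-process instead of blowing up.
        return task.apply(args=args, kwargs=kwargs)
```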
Thanks for your quick answer. I'll try to fix it and send you a pull request as soon as I can.
After a while I figured out what's happening. Inside task.run and task.on_success, the state of the task is updated whether it's eager or not. So if your result backend is not running, that state update is what raises the "Connection refused" error. I figure a simple fix would be to check whether the task is running eagerly before updating its state.

Let me know what you think, and if you agree I'll submit a pull request!
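A minimal sketch of the guard being described (the helper name is illustrative, not jobtastic's actual code):

```python
# Illustrative helper: only touch the result backend when the task is NOT
# running eagerly, so a stopped backend can't raise "Connection refused"
# during an eager run.
def record_state(task, state, meta=None):
    if task.request.is_eager:
        # Eager runs happen in-process; skip the backend write entirely.
        return
    task.update_state(state=state, meta=meta or {})
```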
Ooooh. That makes sense, and I hadn't fully considered the case of a broken result backend. So one way of handling this would be to document that even if you use delay_or_eager or delay_or_fail, a broken result backend can still cause errors when the task records its state.

One necessary change is where the state is updated during the task run, and also in on_success. For both of those cases, we need to find something equivalent to _get_possible_broker_errors_tuple for result-backend errors, so we can catch them and recover.

Does that make sense to you?

Thanks
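To make that "something equivalent" idea concrete, here is one possible shape for it; the exception tuple is a guess, since the real set of errors depends on which result backend is configured:

```python
import logging
import socket

logger = logging.getLogger(__name__)

# Guess at a result-backend analogue of _get_possible_broker_errors_tuple.
# The actual exception classes depend on the configured result backend.
POSSIBLE_BACKEND_ERRORS = (socket.error, IOError)


def update_state_or_warn(task, state, meta=None):
    """Record task state, but never let a dead result backend kill the task."""
    try:
        task.update_state(state=state, meta=meta or {})
    except POSSIBLE_BACKEND_ERRORS:
        logger.warning("Result backend unavailable; skipping %s state update", state)
```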
Oops, and you're also totally right about that code in run and on_success.
Well, is it really necessary to record an eager task's state? By design, Celery's eager tasks don't store their state in the result backend at all.
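A quick illustration of that design point in stock Celery (the app settings below are placeholders): running a task eagerly returns an in-memory EagerResult and never contacts the configured result backend.

```python
from celery import Celery

# Placeholder app; the broker/backend URLs don't even need to be reachable
# for the eager path, which is exactly the point.
app = Celery('sketch', broker='amqp://localhost//', backend='amqp://')

@app.task
def add(x, y):
    return x + y

result = add.apply(args=(2, 3))   # eager, in-process execution
print(result.get())               # 5, served from memory
print(type(result).__name__)      # EagerResult
```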
Hi Laurent,
I think that was a good design choice in Celery itself, but for the kind of user-facing tasks that jobtastic deals with (where we care about progress and almost always also about the result), it adds complexity.

I think that the biggest problem you've identified is that someone using Jobtastic just for caching, thundering herd avoidance and the fault-tolerance helpers might have a task whose results they don't care about at all. In their case, breaking the task on our internal state updates would be the wrong behavior.
I think that's another place where we should catch the exception, log a warning and keep going. The other necessary fix is handling the same kind of error in the other place where we write to the result backend.
Does that make sense to you? Basically, I think we should try to recover wherever it makes sense.

Thanks
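Here is a sketch of that "catch, log a warning and keep going" idea applied to on_success; the exception tuple is the same guess as above, and this is not jobtastic's actual code:

```python
import logging
import socket

from celery import Task

logger = logging.getLogger(__name__)


class ResilientTask(Task):
    """Illustrative only: never let a broken result backend fail a finished task."""

    def on_success(self, retval, task_id, args, kwargs):
        try:
            self.update_state(task_id=task_id, state='SUCCESS',
                              meta={'result': retval})
        except (socket.error, IOError):
            # The work itself succeeded; a dead backend shouldn't turn that
            # into a 500 for the caller.
            logger.warning("Could not record SUCCESS for task %s", task_id)
```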
Yes, makes perfect sense.
Hi all,

Any updates on this issue? I ran into the delay_or_eager bug recently with Celery 3.1.

Thanks, Ed
Hello Ed,

No updates so far. Waiting for either a pull request or some magical time fairy to let me write it :) I still haven't upgraded to Celery 3.1 on my major project, so until that happens, I'm unlikely to be the one to write the pull request.

Thanks
Gotcha, thanks for the update, Wes. |
Hello,
First of all thanks for the great work, JobTastic is really neat and handy.
I tried running delay_or_eager and delay_or_fail with a stopped broker (RabbitMQ), but it seems that the resulting error isn't caught.
What I get is a 500 "[Errno 111] Connection refused".
I'm working on a very simple test case with the bare minimum. My setup is Celery (3.0) along with JobTastic (0.2.2) and amqp (1.0.13).
I figure this shouldn't be a hard fix so let me know if you are aware of this issue or if I'm doing something wrong here. I'll be glad to contribute.
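A minimal sketch of the kind of test case described (the task itself is illustrative, following jobtastic's documented JobtasticTask API): calling delay_or_eager on a trivial task while RabbitMQ is stopped, which is where the 500 "[Errno 111] Connection refused" comes from.

```python
from jobtastic import JobtasticTask


class AddTask(JobtasticTask):
    # Required JobtasticTask attributes: which kwargs identify a run,
    # and how long to hold the herd-avoidance lock.
    significant_kwargs = [('x', str), ('y', str)]
    herd_avoidance_timeout = 60

    def calculate_result(self, x, y, **kwargs):
        return x + y


# With the broker stopped, this is the call that currently raises
# "Connection refused" instead of degrading gracefully.
result = AddTask.delay_or_eager(x=2, y=3)
```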