
Add ability to retry operations #199

Open
gordalina opened this issue Jan 9, 2022 · 3 comments

Comments

@gordalina
Owner

Sometimes the network request to FPM fails; we want to be able to retry these operations.
This could solve #188 and #198.

@jonaseberle

Yes. With --web we also sometimes get a 404, because on some hosters creating the file and then accessing it through the web near-instantaneously is just too fast.
On the next try, or after a short sleep, it would work.

./cachetool stat:clear --web=SymfonyHttpClient --web-url=https://.../ --web-path=./public/ --web-host

Just retrying the whole command, as in

./cachetool stat:clear ... || ./cachetool stat:clear ...

would not help with that problem, because the generated cache-busting PHP file changes its URL on every run.

It currently holds us back a bit on systems where we can't use --fcgi.
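
In other words, the retry has to happen inside the same run, against the same generated URL, with a short pause between attempts. A minimal sketch of that idea (the function name and timings are made up for illustration; this is not cachetool's actual web adapter):

<?php
// Illustrative only: retry the *same* cache-busting URL within one run,
// instead of re-running the command (which generates a new file and URL).

function fetchWithRetry(string $url, int $maxAttempts = 5): string
{
    for ($attempt = 1; ; $attempt++) {
        // A freshly created file may 404 for a moment on some hosters;
        // suppress the warning and treat `false` as "not there yet".
        $body = @file_get_contents($url);

        if ($body !== false) {
            return $body;
        }

        if ($attempt >= $maxAttempts) {
            throw new \RuntimeException("Still failing after {$maxAttempts} attempts: {$url}");
        }

        usleep($attempt * 250000); // back off a little before the next try
    }
}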

@sanderdlm
Contributor

This is only for FPM? So the retry mechanism should be implemented inside the FastCGI adapter?

Using a try/catch and a do/while, which errors would we retry on? All of them? Wouldn't this create a problem where an error is encountered and repeated X times before being reported back to the user?

The ReadFailedException from the FastCGI package seems to be the one thrown when a request to a socket times out; would retrying on this error be enough? The same exception is also used for other scenarios, though, for example when a socket doesn't even exist.

Maybe we should go upstream and ask https://github.com/hollodotme/fast-cgi-client to implement a specific RequestTimedOutException for this case? Then we could isolate the problem and handle it with a relatively simple try/catch.
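
For reference, roughly what that catch would look like. This is only a sketch: ReadFailedException exists today under hollodotme\FastCGI\Exceptions, RequestTimedOutException is the hypothetical exception proposed above, and the request call is a stand-in, not the client's real API:

<?php

use hollodotme\FastCGI\Exceptions\ReadFailedException;
// use hollodotme\FastCGI\Exceptions\RequestTimedOutException; // proposed upstream, does not exist yet

try {
    $response = $sendFastCgiRequest(); // stand-in closure for the actual client call
} catch (ReadFailedException $e) {
    // Today this catch is ambiguous: it fires on a read timeout, but also
    // when the socket doesn't exist at all, so a blind retry here may
    // repeat errors that can never succeed.
    throw $e;
}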

Let me know what you think!

@gordalina
Owner Author

@dreadnip thank you for contributing. I'll reply to your comments inline.

This is only for FPM? So the retry mechanism should be implemented inside the FastCGI adapter?

The best place to put the retry logic is in the CLI application, where all commands would get a --retry=<number> option. This would keep the library API consistent and would solve the issues raised in #188 and #198.
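
A minimal sketch of that idea, assuming Symfony Console (which cachetool is built on); the option name comes from this discussion and the helper is hypothetical, not shipped code:

<?php

use Symfony\Component\Console\Input\InputOption;

// Registered once on the application so every command inherits it:
// $application->getDefinition()->addOption(
//     new InputOption('retry', null, InputOption::VALUE_REQUIRED, 'Retry a failed operation up to N times', 0)
// );

// Hypothetical helper the commands would run their work through:
function runWithRetries(callable $operation, int $retries)
{
    $attempt = 0;
    do {
        try {
            return $operation();
        } catch (\Exception $e) {
            // Retry on any error, as discussed below; give up after N retries
            // and surface the last error to the user.
            if ($attempt++ >= $retries) {
                throw $e;
            }
        }
    } while (true);
}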

Using a try/catch and a do/while, which errors would we retry on? All of them? Wouldn't this create a problem where an error is encountered and repeated X times before being reported back to the user?

There are classes of errors that, if they fail once, will fail every time. But given that this covers an edge case that only happens occasionally, I don't think it would be a problem to retry on all errors. Of course, if there are specific errors we know we can catch, we can exclude them from retrying, but at this point it's fine to assume all of them can be retried.

The ReadFailedException from the FastCGI package seems to be the one thrown when a request to a socket times out; would retrying on this error be enough? The same exception is also used for other scenarios, though, for example when a socket doesn't even exist.

Maybe we should go upstream and ask https://github.com/hollodotme/fast-cgi-client to implement a specific RequestTimedOutException for this case? Then we could isolate the problem and handle it with a relatively simple try/catch.

Having the retry logic in the CLI application would make this a moot point.

Are you interested in contributing a patch?
