Hi, we are in the process of upgrading our deployment of knowledge-repo from a pretty old version. Part of this process is giving some much-needed TLC to our usage of the system, but we also hope to help the upstream product and contribute back where possible.
To that end, we are at a point with our new test deployment where we are seeing what appears to be some thread clashing, as each gunicorn worker spawns its own watchdog when in reality only one needs to, right? The symptom is below:
```
Process Process-1:1:
Traceback (most recent call last):
  File ".../python3.6/site-packages/knowledge_repo/app/models.py", line 134, in wrapped
    return function(*args, **kwargs)
  File ".../python3.6/site-packages/knowledge_repo/app/index.py", line 141, in update_index
    current_repo.update()
  File ".../python3.6/site-packages/knowledge_repo/repositories/gitrepository.py", line 144, in update
    self.git.branches[branch].checkout()
  File ".../python3.6/site-packages/git/refs/head.py", line 219, in checkout
    self.repo.git.checkout(self, **kwargs)
  File ".../python3.6/site-packages/git/cmd.py", line 542, in <lambda>
    return lambda *args, **kwargs: self._call_process(name, *args, **kwargs)
  File ".../python3.6/site-packages/git/cmd.py", line 1005, in _call_process
    return self.execute(call, **exec_kwargs)
  File ".../python3.6/site-packages/git/cmd.py", line 822, in execute
    raise GitCommandError(command, status, stderr_value, stdout_value)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
  cmdline: git checkout master
  stderr: 'fatal: Unable to create '.../repo/.git/index.lock': File exists.
```
Granted, the single worker that gets the lock does the update as needed, so from the outside looking in, everything works. It would be nice if these errors could be cleaned up, though, so it doesn't look like things are failing.
I threw together a small changeset that appears to fix the problem, which gives me some confidence that I have characterized it correctly, but I am not entirely sure it is the best way to handle this: every worker still tries to initiate an index operation, and I don't know how the change affects deployments that don't use gunicorn. I do see code that looks like it protects a horizontally-scaled deployment (i.e., multiple replicas of knowledge-repo running against the same repository) from clashing, but within an individual deployment I think similar guards are still needed, or am I wrong?
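In case it helps to make the idea concrete, here is a minimal sketch of the kind of per-host guard I have in mind. This is not the actual changeset; the function name and lock path are made up for illustration. The idea is just a non-blocking, cross-process file lock so that only one gunicorn worker performs the index update while the others skip it instead of racing for .git/index.lock:

```python
# Hypothetical illustration only (the helper name and lock path are made up):
# take a non-blocking, cross-process file lock before updating the index, so
# only one gunicorn worker performs the update and the rest skip it instead
# of racing for .git/index.lock.
import fcntl


def try_update_index(repo, lock_path="/tmp/knowledge_repo_index.lock"):
    lock_file = open(lock_path, "w")
    try:
        # Fails immediately with BlockingIOError if another worker holds the lock.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        # Another worker is already updating the index; nothing to do here.
        lock_file.close()
        return False
    try:
        repo.update()  # the call that currently fails with the index.lock error
        return True
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)
        lock_file.close()
```

A guard like this only coordinates workers on a single host, which is part of why I'm asking whether it belongs inside knowledge-repo itself or should stay deployment-specific.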
Thanks for making this software available! It has been a great communication tool at Recursion Pharmaceuticals.
Highly appreciated, and whenever you get to it is fine; we can hack on a fork for now. Let me know if I can help out in any way.
Should I be adding reviewers or tagging people on any PRs I open, or do you all have visibility into that? I only ask because I opened two pretty small ones around the same time I was working on this, and I wasn't sure whether I should be tagging people. Don't take this as an obligation to look at them now; I am just genuinely curious what the "right" way to suggest edits for this project is.
For context, one is a fairly straightforward bug fix (#584), while the other (#583) ports a feature from gunicorn that I am not sure you want to support. Again, I don't want to rush you or force these features on you; I just want to make sure they don't fall through the cracks.
Thanks again for continuing to maintain this project!