Handle outOfMemory errors and exceptions #32

When an individual server instance has to load more than three parser models, the server throws an OutOfMemoryError. On the user's end, all they see is a web page stuck loading documents. We should find a way to handle such failures gracefully; ideally, we would also find a way to work around the project's high memory requirements.
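A minimal sketch of what graceful handling could look like (hypothetical names, nothing here is from the current FLAIR codebase): catch the `OutOfMemoryError` at the point where a model is loaded and convert it into a checked exception that the request handler can turn into a proper error response, instead of letting the request hang.

```java
import java.util.function.Supplier;

/** Hypothetical checked exception for a model that could not be loaded. */
class ModelLoadException extends Exception {
    ModelLoadException(String message, Throwable cause) {
        super(message, cause);
    }
}

final class SafeModelLoader {
    /**
     * Runs the given loader and converts an OutOfMemoryError into a checked
     * exception. Once the oversized allocation is abandoned, there is usually
     * enough headroom left to build an error response for the client instead
     * of leaving the page stuck loading.
     */
    static <T> T load(String language, Supplier<T> loader) throws ModelLoadException {
        try {
            return loader.get();
        } catch (OutOfMemoryError e) {
            throw new ModelLoadException(
                "Not enough memory to load the " + language + " parser model", e);
        }
    }
}
```

The endpoint that triggers parsing could then catch `ModelLoadException` and return an explicit error (e.g. HTTP 503) for the front end to display. Catching `OutOfMemoryError` is only a stopgap, but it is enough to tell the user what happened.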
Comments
Even if we beef up our deployment server enough to handle the high memory usage, I still think this is worthwhile if we want individual developers to be able to test their code by performing multiple website searches.
I agree this would be good. Models are only loaded once they are requested, so if you only want to work on the Arabic part of the app, you can keep your memory requirements low by not querying English or Russian, for example. I could imagine the possibility of unloading a model to try a different language, but I don't know how feasible or worthwhile that would be. Regardless, the front end should be sensitive to these kinds of failures whenever possible.
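For what a lazily loaded, unloadable model registry might look like, here is a rough sketch (hypothetical names, not our actual loading code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Hypothetical per-language cache: load a model on first request, drop it on demand. */
final class ModelCache<M> {
    private final Map<String, M> models = new ConcurrentHashMap<>();
    private final Function<String, M> loader;

    ModelCache(Function<String, M> loader) {
        this.loader = loader;
    }

    /** Loads the model for this language only on the first request for it. */
    M get(String language) {
        return models.computeIfAbsent(language, loader);
    }

    /** Drops the cached model so the GC can reclaim it before another load. */
    void unload(String language) {
        models.remove(language);
    }
}
```

Unloading only frees memory if nothing else in the pipeline still holds a reference to the model, which is probably the hard part in practice.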
I have not done any work on this issue. I will say that the integration tests are able to successfully load each model when testing web search operations. This works because JUnit creates a new instance of the test class for each @Test method, so the running process only uses one model at a time and the memory is freed in between. Although FLAIR doesn't handle these exceptions well in its current state, we can use JUnit to test different languages without a problem.
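To illustrate that JUnit behavior, assuming a JUnit 4 setup with a hypothetical `loadModel` helper standing in for the real parser setup: the test class is instantiated once per @Test method, so each run holds at most one model, and the previous instance becomes garbage before the next test.

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class WebSearchMemoryTest {
    // Held per test-class instance; JUnit 4 creates a new instance for each
    // @Test method, so this reference is unreachable once a test finishes.
    private Object model;

    // Hypothetical stand-in for loading a real parser model.
    private static Object loadModel(String language) {
        return new Object();
    }

    @Test
    public void englishSearchUsesOnlyTheEnglishModel() {
        model = loadModel("english");
        assertNotNull(model);
    }

    @Test
    public void arabicSearchUsesOnlyTheArabicModel() {
        model = loadModel("arabic");
        assertNotNull(model);
    }
}
```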
I was just looking at the jar files for the NLP models, and all told they take up less than
We need to profile this to see what is taking up so much space. Maybe something like VisualVM (Tomcat example here)? Or maybe YourKit, which has a free license for open-source projects? Something that will tell us exactly which objects are taking up how much space. Maybe the models really are just that big, but it would be nice to be sure of that. ;-p
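One low-effort complement to an interactive profiler, assuming we control the JVM flags on the deployment server: start Tomcat with `-XX:+HeapDumpOnOutOfMemoryError` (plus `-XX:HeapDumpPath=...` to choose where the dump lands), and the JVM will write an `.hprof` snapshot at the moment of failure that VisualVM or YourKit can open to show exactly which objects dominate the heap.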