Allow DSLs deployments to load a parser from the precompiled file without waiting for evaluators to load #297
Conversation
…immediately, without waiting for an evaluator to finish loading
Looks good as code. I'm sceptical with respect to the design and its future maintainability. Please explain why we actually need this additional support: not the motivation from above, but the reason there are still seconds needed to load the parser. I thought I had covered all the tracks. Is a parser generator still loading? Or are we now avoiding the loading of any module of the language implementation? I'm hesitant about this design, since if this additional API is avoidable, that would be better: (a) it is a lot of code, (b) it is an extra externally visible API, and (c) it more than doubles the number of fields in the current contract. It also breaks the consistency of the parametrized-lsp design, which normally goes through Rascal callbacks. So I need some convincing; are there no alternatives for optimizing this problem that keep the design as-is? What is currently taking the time before the loaded parser is presented to the LSP?
Thanks for reviewing. Currently, the first parse starts the extension, which does:
Now, this already takes a lot less time with the precompiled module parsers and the precompiled stored parser, but it can still take up to 10s--20s (especially on older/slower laptops). Quite a lot of that time (as also seen in the profiles in #267) is spent parsing Rascal modules and in general import work, and I have not been able to identify further opportunities for performance improvements there. So indeed, the final step I propose to close the gap is to skip all this module loading. I agree it does grow the interface on the typescript side, and I don't really like that. We could mark it as deprecated/internal/experimental? The benefit is that the npm dependencies don't follow VS Code upgrades, so if we deprecate it at a later point, people's extensions won't be broken, since they'll have to explicitly update. An alternative that would get us a bit closer would be to use the multiple registry feature to load a specific one that only has the syntax module imported. This would reduce the import times, but it would increase memory pressure, as we would keep yet another evaluator alive just for that syntax module.
Thanks for the analysis. 10 or 20 seconds to load a few modules does not make sense to me. Something is very weird in this entire profile.
In theory, loading 10 modules might take around 3 seconds in the worst case, but not 10 or 20 seconds. So I think we have to dig a little deeper here. If we really can't find the issue, then we'll have to come up with a temporary solution like the one you proposed. Note that the latest solution makes the concrete syntax parser cache dead code, but perhaps it is still handy for other use cases. I'm going to do some code staring based on the profiles you shared.
Looking at the profile, most of the … I've shared a trace file with you so you can see what's happening.
I've tested this and it works as intended: the grammar loads fast and the user sees syntax highlighting well before the evaluator has finished loading.
Great, and good that it works. We have the reservations about the temporariness of this interface in place. I would just like to see the stored parser reused behind the interface I designed for this, rather than the implementation copied. Do you want me to fix this? I think it would only take 30 minutes or so.
private final String name;
private final String extension;
private final @Nullable Exception loadingParserError;
private final @Nullable Class<IGTD<IConstructor, ITree, ISourceLocation>> parserClass;
Why leak these implementation details if Prelude.loadParsers hides all of that? All you have to do is call that function with the right source location. You receive an IFunction that can be called directly with an input source location or string (first parameter), followed by the location used for the @loc annotations. The options are provided with the loadParsers method. If you don't want to instantiate a Prelude, a RascalFunctionValueFactory instance with an empty Evaluator for the constructor will also do fine.
With this you save about 130 lines of code, and you can also hide behind the interface in case future parser implementations change the way they work.
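For concreteness, here is a rough sketch of the route described above, based only on the description in this thread. The package names, the RascalFunctionValueFactory constructor, and the exact loadParsers parameter list are assumptions and would need to be checked against the Rascal sources; this is not the actual patch.

```java
// Sketch only: import paths, constructor, and the loadParsers argument list
// are assumptions; check org.rascalmpl.library.Prelude / the Rascal sources.
import io.usethesource.vallang.IFunction;                 // assumed package
import io.usethesource.vallang.ISourceLocation;
import io.usethesource.vallang.IValue;
import org.rascalmpl.interpreter.Evaluator;
import org.rascalmpl.values.RascalFunctionValueFactory;   // assumed package
import org.rascalmpl.values.parsetrees.ITree;             // assumed package

final class StoredParserExample {

    static IFunction loadStoredParser(Evaluator emptyEvaluator, ISourceLocation storedParsers) {
        // The factory only needs the (otherwise unused) evaluator for its constructor.
        RascalFunctionValueFactory factory = new RascalFunctionValueFactory(emptyEvaluator);
        // loadParsers reads the precompiled parser file and returns the parse
        // function; any option arguments are omitted/assumed here.
        return factory.loadParsers(storedParsers /*, options (assumed) */);
    }

    static ITree parse(IFunction parser, IValue input, ISourceLocation origin) {
        // Once the IFunction is in hand, neither the factory nor the evaluator
        // is needed any more; calling it directly does not lock the evaluator.
        // First argument: the input (source location or string); second: the
        // location used for the @loc annotations.
        return (ITree) parser.call(input, origin);
    }
}
```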
After the IFunction is acquired, there is no more need for the RascalFunctionValueFactory and its empty Evaluator, or a Prelude instance. The function instance will lock on the evaluator only when/if it is called from the evaluator, and not if it is called manually via the IFunction.call interface.
I had to copy that code since it had a hard dependency on Evaluator; I specifically wanted to avoid loading an evaluator.
Yes, I've tested it; this alternative approach works. I really wanted to avoid starting an evaluator, but I agree that just constructing one and not doing anything with it is an acceptable tradeoff.
Kudos, SonarCloud Quality Gate passed!
Note, this is only for VS Code DSL deployments, not for normal interactive development.
What we've observed is that users interpret the absence of syntax highlighting as an indication that "it" is not working. When a VS Code extension that uses Rascal starts, it can take 5 to 10 seconds (depending on the machine, even longer) before all the modules are loaded and the parser function is handed to the LSP server. Many improvements have already been made (see the excellent discussion in #267), but we wanted to have the opportunity to really close the gap.
This change builds upon the work of @jurgenvinju to offer an opaque file that contains precompiled parsers. It's only available when using the typescript (aka npm/deployment) path, but in that case it will load a parser without any need for an evaluator.