Suggestion
Provide a %catchall token_name directive that declares that any input which cannot be matched as a defined token or terminal should be returned as the token token_name. This would allow for forgiving grammar definitions that have a catchall which doesn't literally catch all characters, but only everything that would otherwise cause a syntax error.
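For illustration, a grammar using the proposed directive might look roughly like this. This is hypothetical syntax that Lark does not currently support, and UNPARSED is just a made-up terminal name:

```
start: (WORD | UNPARSED)+

WORD: /[a-zA-Z]+/

// hypothetical: any input that no defined terminal matches would be
// emitted as an UNPARSED token instead of raising a syntax error
%catchall UNPARSED
```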
Describe alternatives you've considered
I've tried using catchall definitions like /.+/s at a lower priority than all other tokens, but that just returned the whole source code I gave it as single-character catchall tokens, completely ignoring all valid defined tokens. Even where the input wouldn't have caused a syntax error without the catchall, it still singled out every character.
Additional context
This could be useful for e.g. markup parsing. I'm creating my own markup language right now, and a forgiving grammar that never errors out and instead returns partially usable results would help a lot.
If the catchall is /.+/, this would still happen, just starting at the first error. Using /./ with a low priority should work for the Earley parser.
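A minimal sketch of that workaround, assuming the Earley parser; the grammar, the terminal names and the priority value are made up for illustration:

```python
from lark import Lark

# A low-priority single-character catchall: every defined terminal outranks
# ANY, so only characters that nothing else matches come back as ANY tokens.
grammar = r"""
    start: (WORD | ANY)+

    WORD: /[a-zA-Z]+/
    ANY.-1: /./s        // lowest priority, matches one character at a time

    %import common.WS
    %ignore WS
"""

parser = Lark(grammar, parser="earley")
print(parser.parse("hello <$%> world").pretty())
```

Here "hello" and "world" come out as WORD tokens, while the characters that would otherwise cause a parse error each come back as a single-character ANY token.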
I don't think there is a viable way to implement a catchall for LALR. If you have a concrete idea, feel free to suggest it. Note that backtracking is not an acceptable solution.
You are probably better off using the scan function I coded up in other issues, most recently in #1424, to find the parsable subsets. But something like HTML is just a terrible fit for Lark anyway, since it's context-sensitive and can't really be parsed correctly with it.