For token-heavy grammars, instead of manually defining a set of token names, you can populate that set using a class property.
```python
import sly

class classproperty:
    """Descriptor that exposes a computed value as a class-level attribute."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, owner_self, owner_cls):
        return self.fget(owner_cls)

class MyLexer(sly.Lexer):
    # Token rule definitions go in the class body as usual.

    @classproperty
    def tokens(cls):
        # Collect every uppercase attribute name defined on the lexer class.
        return {x for x in cls.__dict__ if x.isupper()}
```
In the example above, the `tokens` class property builds the set of token names from `MyLexer`'s uppercase attribute names.
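The descriptor mechanics can be checked without sly at all. Here is a minimal, sly-free sketch; `DemoRules` and its attributes are made-up stand-ins for token rule definitions:

```python
class classproperty:
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, owner_self, owner_cls):
        return self.fget(owner_cls)

class DemoRules:
    NE = '!='           # stand-ins for token rule definitions
    DIGITS = r'\d+'
    helper = 'ignored'  # lowercase names are excluded

    @classproperty
    def tokens(cls):
        return {x for x in cls.__dict__ if x.isupper()}

print(DemoRules.tokens)   # {'NE', 'DIGITS'} (set order may vary)
```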
This removes the need to add each token name to the set by hand, as in:
```python
import sly

class MyLexer(sly.Lexer):
    # Every token name has to be repeated here as a string.
    tokens = { 'NE', 'DIGITS' }

    NE = '!='

    @_(r'\b([0-9.]+)[f]{0,1}\b')
    def DIGITS(self, t):
        pass
```
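For comparison, here is the same lexer written with the class property instead, so the token set never has to be listed by hand. This is only a sketch stitched together from the two snippets above, assuming sly's build step resolves `tokens` through the descriptor:

```python
import sly

# classproperty is the helper defined in the first snippet.

class MyLexer(sly.Lexer):
    @classproperty
    def tokens(cls):
        # NE and DIGITS below are picked up automatically.
        return {x for x in cls.__dict__ if x.isupper()}

    NE = '!='

    @_(r'\b([0-9.]+)[f]{0,1}\b')
    def DIGITS(self, t):
        pass
```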
It would be nice, though, if the user could instruct `sly.lexer._build` to collect the token names from the lexer class automatically.