feat: add support for HTML, Markdown and Typst files #127
Conversation
CodSpeed Performance Report: merging #127 will not alter performance.
Any idea what's going on with the tests?
Hi @Rolv-Apneseth, thanks for your PR!
See my comments and let me know what you think :-)
src/cli/check.rs
Outdated
"md" | "mkd" | "mdwn" | "mdown" | "mdtxt" | "mdtext" | "markdown" => {
    FileType::Markdown
},
"html" | "htm" => FileType::Html,
Any documentation or link that documents those many different extensions for the same filetype?
I believe I just copied this list from https://superuser.com/a/285878. Maybe I should also add mdx, as mentioned in a comment, or do you want to cut it down to just md and markdown?
I think it's better to either stick to simplicity, or to match the default behavior of some popular tool, like ripgrep's default list.
Alright, well I'll copy that list then
Wouldn't it be better to return a vector of data annotations here? That way (1) we avoid memory allocation, and (2) we can benefit from LT's support for annotated data. The latter also has the advantage that LT will compute the correct location of an error in the text, and we can then print annotated errors with respect to the raw content of the file.
I'm not too familiar with how the data annotation side of things works, and I had issues trying to use it with the current setup that splits the requests, but I can explore it a bit more if you want.
Some notes / questions though:
- Should all requests sent by the CLI for files be data annotations, including plain text, or does that need to be handled separately from the other file types?
- The requests will be a lot larger than the current plain-text ones, right? So only much smaller files would be supported.
- I will also need to change the current approach for Markdown files, as that wouldn't make sense any more.
- I think all special file types should be checked using data annotations. That excludes raw text files.
- That can be an issue with very large files, but let's not worry too much about that for now. It can be solved by automatically splitting the request into many (which can itself be a problem if we send too many requests to the public API, but people should host their own server).
- Probably, yes. Let me know if I can help!
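For reference, LanguageTool's check endpoint accepts a data parameter whose JSON holds an annotation array: each item is either text (checked) or markup (skipped), and markup may carry an interpretAs replacement. A minimal dependency-free sketch of building that payload; the Annotation type and to_json helper are hypothetical illustrations, not this crate's API:

```rust
// Hypothetical sketch of LanguageTool's "data" annotation payload.
// Each item is either plain text to check, or markup to skip; markup
// may carry an "interpretAs" replacement (e.g. "\n\n" for block tags).
pub enum Annotation {
    Text(String),
    Markup { markup: String, interpret_as: Option<String> },
}

// Serialize via format!/Debug escaping; a real implementation would
// likely use serde_json, but this keeps the sketch dependency-free.
pub fn to_json(items: &[Annotation]) -> String {
    let parts: Vec<String> = items
        .iter()
        .map(|a| match a {
            Annotation::Text(t) => format!("{{\"text\":{:?}}}", t),
            Annotation::Markup { markup, interpret_as: None } => {
                format!("{{\"markup\":{:?}}}", markup)
            }
            Annotation::Markup { markup, interpret_as: Some(i) } => {
                format!("{{\"markup\":{:?},\"interpretAs\":{:?}}}", markup, i)
            }
        })
        .collect();
    format!("{{\"annotation\":[{}]}}", parts.join(","))
}
```

Because LT's reported offsets refer to positions in the annotated input (text plus markup), errors found this way map straight back to the raw file content.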
"code" => {
    txt.push_str("_code_");
    return;
},
"a" => {
    txt.push_str("_link_");
    return;
},
"pre" => {
    txt.push_str("_pre_");
    txt.push_str("\n\n");
It is a bit linked to my general file comment, but there you could simply return a Markup note, with no interpret_as, and LT should ignore it, I think.
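Concretely, the match arms above could emit markup annotations instead of pushing placeholder strings into the text. A rough sketch under that assumption; the Annotation enum and handle_tag helper are illustrative, not the PR's actual code:

```rust
// Illustrative only: emit data annotations instead of placeholder text.
enum Annotation {
    Text(String),
    Markup { markup: String, interpret_as: Option<String> },
}

fn handle_tag(tag: &str, raw: &str, out: &mut Vec<Annotation>) {
    match tag {
        // Inline elements LT should skip entirely: plain markup,
        // no interpret_as needed.
        "code" | "a" => out.push(Annotation::Markup {
            markup: raw.to_string(),
            interpret_as: None,
        }),
        // Block element: interpret as a paragraph break so sentence
        // boundaries around it are still detected.
        "pre" => out.push(Annotation::Markup {
            markup: raw.to_string(),
            interpret_as: Some("\n\n".to_string()),
        }),
        // Anything else is real text to be checked.
        _ => out.push(Annotation::Text(raw.to_string())),
    }
}
```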
Hey @jeertmans, I'll get back to this in a couple days, thanks for the review though
No issue, Merry Christmas and happy end of year!
Thanks, you too :)
Looks strange! Sometimes, the online version of the LT API returns different results than the one you can host locally. Did you run tests on a self-hosted API?
Yes, it was with a self-hosted API. Looks like these errors are on the current v3 branch too though (#117), so I don't think these changes are related.
Hey @jeertmans, sorry, been busy the last couple days. I had a go there at changing the
Oh sorry - forgot to mention I haven't done anything with splitting the data annotation requests, so you'll need to test against small files
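One way that splitting could eventually look: greedily pack already-serialized annotations into batches that stay under a byte budget, keeping in mind that LT's error offsets are relative to each request, so results from later batches would need re-offsetting. A hypothetical sketch, not code from this PR:

```rust
// Hypothetical splitter for large data-annotation requests: packs
// serialized annotation items into batches whose combined size stays
// under `max_bytes`. A single oversized item still gets its own batch.
// Note: error offsets returned by LT are per-request, so callers must
// re-offset results from every batch after the first.
fn split_batches(items: Vec<String>, max_bytes: usize) -> Vec<Vec<String>> {
    let mut batches = Vec::new();
    let mut current: Vec<String> = Vec::new();
    let mut size = 0usize;
    for item in items {
        if !current.is_empty() && size + item.len() > max_bytes {
            // Current batch is full; start a new one.
            batches.push(std::mem::take(&mut current));
            size = 0;
        }
        size += item.len();
        current.push(item);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}
```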
Hi @Rolv-Apneseth, sorry for the delay. I looked at your PR and it looks nice! Just a small question: is there any motivation to keep
Hey @jeertmans, it was related to the comment I made above:
If you think the approach is alright, I can have a go at doing the same thing for Markdown and HTML. And as I mentioned, I haven't implemented any kind of splitting for the data annotation request, so it will just fail for a lot of files due to the length.
I think I'll merge like that, and we can still improve the model later. Let's focus first on more "fundamental" features, and improve those file-format specific features later. Thanks!
Oh - I thought this file format stuff was kind of the last step towards v3. Could you clarify what steps are next?
This might, but let me take a look at this (the original PR), and see what we (you) have achieved so far, as I lack a stepped-back view of that big rewrite. I will have some time this week, so let me ping you back once it's done ;-)
Sure thing
Continuing #117, as per our discussion.
This PR adds parsers for HTML, Markdown and Typst files, which are used in the CLI to support different file types. Auto-detection is used by default, which simply chooses a parser based on each file's extension.
I think the parsers are working OK, but wanted to open this PR as soon as possible so you can let me know what you think, and to see if you have some files that maybe this doesn't work great for.
I decided to take the same path that pyLanguagetool took for Markdown files, by first converting them to HTML, and then using a separate library for parsing that (html_parser in our case). I figured we'd probably want support for HTML eventually anyway, but you can let me know if you think I should take a similar approach to the HTML and Typst parsers.
As for the HTML and Typst parsers, I've inserted some "placeholders" in place of e.g. code blocks, links etc., which LanguageTool ignores, e.g. _code_, _math_, _link_. This seems to work well enough, but if you have any other ideas on how to tackle these, let me know and I can have a look.