Hot-Reloading Feature Implementation #46
Conversation
@mohamedsalem401 Thank you so much for your contribution! 🎉 I'll review this PR as soon as possible, probably today or tomorrow.
@mohamedsalem401 First, this is awesome 🎉 and it works. In terms of implementation, I'm just trying to grok how this works exactly. For my part, I had been imagining a super-simple implementation like:
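(A rough illustration of that idea, not actual project code: the watcher only re-runs the normal indexing for the file that changed. `indexFile` is a hypothetical stand-in for whatever the markdowndb pipeline entry point is.)

```typescript
import chokidar from "chokidar";
// Hypothetical helper that runs the existing markdowndb pipeline for one file.
import { indexFile } from "./markdowndb";

// Watch the content folder and re-index only the file that was added or changed,
// letting markdowndb itself do all database writes.
const watcher = chokidar.watch("content/**/*.md", { ignoreInitial: true });

watcher.on("add", (path) => indexFile(path));
watcher.on("change", (path) => indexFile(path));
```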
Here there seems to be a bunch of direct writing to the markdowndb database. Is that right? Plus there seems to be some kind of check to identify if something changed. And if so, why do we need to do that? General point: I think we want to do any updating of the database through the markdowndb toolchain (otherwise we risk duplication and, one day, being out of sync when markdowndb changes).
My main concerns are:
The current proposal suggests updating all related information in three tables (files, file_tags, links) each time a file is modified. This process involves:
Additionally, implementing a feature that avoids adding a link until the file whose ID appears in that link actually exists requires:
While this might not be a significant concern for a local database, it could pose challenges if these queries are run against a remote database, especially given how frequently updates fire while editing markdown (potentially on every keystroke). If querying the database isn't a big deal, we could consider removing and then re-adding every changed file, which would make the code simpler. What do you think of this idea?
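A minimal sketch of that "remove and then re-add every changed file" idea, assuming a Knex/SQLite handle and the files / file_tags / links tables discussed in this thread; the `_id` column name and `indexFile` helper are assumptions, not the actual markdowndb API:

```typescript
import { Knex } from "knex";
// Hypothetical helper that runs the normal markdowndb pipeline for one file.
import { indexFile } from "./markdowndb";

// Simpler "remove then re-add" handling for a changed file: drop everything the
// old version contributed, then let ordinary indexing re-create it from scratch.
async function reindexChangedFile(db: Knex, fileId: string, filePath: string) {
  await db("file_tags").where({ file: fileId }).del(); // tags owned by this file
  await db("links").where({ from: fileId }).del();     // its outgoing links
  await db("files").where({ _id: fileId }).del();      // the file row itself (_id is assumed)

  await indexFile(filePath); // re-run the pipeline for just this file
}
```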
Well, the idea is to be efficient. Instead of rewriting everything for the file, the code identifies and updates only the parts that actually changed. For example, if a user adds a tag, the code compares the new and old versions of the file and then runs a single query on one table (file_tags) to reflect the change. In short, we compare the old and new state to figure out exactly what needs to be updated and issue only the smallest queries necessary, which keeps the database updates efficient.
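A small sketch of what that fine-grained update could look like for tags, assuming a Knex handle and a `file_tags(file, tag)` table as mentioned above (illustrative only, not the PR's actual code):

```typescript
import { Knex } from "knex";

// Diff the old and new tag lists and touch only the file_tags table,
// issuing the smallest queries needed to reflect the change.
async function updateTags(db: Knex, fileId: string, oldTags: string[], newTags: string[]) {
  const added = newTags.filter((t) => !oldTags.includes(t));
  const removed = oldTags.filter((t) => !newTags.includes(t));

  if (removed.length > 0) {
    await db("file_tags").where({ file: fileId }).whereIn("tag", removed).del();
  }
  if (added.length > 0) {
    await db("file_tags").insert(added.map((tag) => ({ file: fileId, tag })));
  }
}
```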
You run the markdowndb build pipeline for that file.
Ditto. And what kind of errors do you imagine?
This is super-cool, but I also wonder if it may be a bit of overkill to start with. I'd build the simplest thing possible and then come back to this if there are problems ("YAGNI"). In any case it should be
Yes, exactly, I would do that. And I would do it in the markdowndb toolchain, not in the watcher code. It seems to me that you may have two (v useful) things combined together here:
OK. My conclusion here is to reintroduce this once we have done #47.
Here is the pull request, which introduces hot-reloading functionality using Chokidar.js. I've invested time and effort in simplifying the new feature's architecture and anticipating potential future enhancements.
Here's a brief overview of the implementation logic:
- Parse the changed file and build its new `fileJson`.
- Compare the new `fileJson` with the data already stored for that file.
- Update the row in the `files` table where the ID matches the file ID.
- Delete the rows from `file_tags` where the file corresponds to the file ID.
- Insert the file's new tags into the `file_tags` table.
- Delete the rows from the `links` table whose `from` field matches the file ID.
- Add the links whose `to` field matches the file ID to the list of broken links and remove them from the `links` table.

Tasks
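For reference, a rough sketch of the per-file update steps listed in the overview above, assuming a Knex handle and the files / file_tags / links tables; apart from `from`, `to`, `file`, and `tag`, the column and field names are assumptions rather than the actual markdowndb schema:

```typescript
import { Knex } from "knex";

// Illustrative shape of the parsed file; field names are assumptions.
interface FileJson {
  fileColumns: Record<string, unknown>; // values for the files table row
  tags: string[];
  links: string[]; // IDs of the files this file links to
}

async function applyFileChange(db: Knex, fileId: string, fileJson: FileJson) {
  // Update the files row where the ID matches the file ID (column name assumed).
  await db("files").where({ _id: fileId }).update(fileJson.fileColumns);

  // Replace the tag rows for this file.
  await db("file_tags").where({ file: fileId }).del();
  await db("file_tags").insert(fileJson.tags.map((tag) => ({ file: fileId, tag })));

  // Replace the outgoing links (rows whose `from` field matches the file ID).
  await db("links").where({ from: fileId }).del();
  await db("links").insert(fileJson.links.map((to) => ({ from: fileId, to })));

  // Incoming links (rows whose `to` field matches the file ID) would be moved
  // to the broken-links list and removed from `links` when this file is deleted.
}
```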