
Investigate the need to cache #2

Closed
abeldekat opened this issue Jan 1, 2025 · 2 comments · Fixed by #3
Comments

@abeldekat (Owner)

In cmp-luasnip, a cache is maintained per filetype, both for snippets and for docs. That may well be done for performance reasons. Or perhaps it is because LuaSnip supports snippets being dynamically added/changed?

On each completion, all snippets do need to be transformed into results for nvim-cmp. I am still unsure whether caching should be added.

See this answer from echasnovski
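The cmp-luasnip approach described above can be sketched in a few lines. This is a minimal illustration of the caching shape only, not cmp-luasnip's actual code; `get_snippets` and `to_items` are hypothetical stand-ins for the real LuaSnip lookup and nvim-cmp conversion:

```lua
-- Per-filetype cache: conversion results are computed once per filetype
local cache = {}

local function cached_items(ft, get_snippets, to_items)
  if cache[ft] == nil then
    -- Expensive step: transform all snippets into completion results
    cache[ft] = to_items(get_snippets(ft))
  end
  return cache[ft]
end

-- LuaSnip supports adding snippets at runtime, so the cache needs to be
-- invalidated (per filetype, or entirely) when that happens
local function invalidate(ft)
  if ft ~= nil then cache[ft] = nil else cache = {} end
end
```

The invalidation hook is the part that makes dynamic snippet changes the more plausible motivation: a pure performance cache could be kept forever.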

@echasnovski

Following this comment, here is a patch I used to profile default_prepare() with and without caching (comment out the irrelevant parts):

diff --git a/lua/mini/snippets.lua b/lua/mini/snippets.lua
index 8dafaef..59ba88d 100644
--- a/lua/mini/snippets.lua
+++ b/lua/mini/snippets.lua
@@ -993,11 +993,19 @@ end
 ---
 ---@return ... Array of snippets and supplied context (default if none was supplied).
 MiniSnippets.default_prepare = function(raw_snippets, opts)
+  local start_time = vim.loop.hrtime()
   if not H.islist(raw_snippets) then H.error('`raw_snippets` should be array') end
   opts = vim.tbl_extend('force', { context = nil }, opts or {})
   local context = opts.context
   if context == nil then context = H.get_default_context() end
 
+  local cache_id = context.buf_id .. context.lang
+  if H.prepare_cache[cache_id] then
+    table.insert(_G.durations, 0.000001 * (vim.loop.hrtime() - start_time))
+    return unpack(H.prepare_cache[cache_id])
+  end
+
   -- Traverse snippets to have unique non-empty prefixes
   local res = {}
   H.traverse_raw_snippets(raw_snippets, res, context)
@@ -1005,9 +1013,14 @@ MiniSnippets.default_prepare = function(raw_snippets, opts)
   -- Convert to array ordered by prefix
   res = vim.tbl_values(res)
   table.sort(res, function(a, b) return a.prefix < b.prefix end)
+  H.prepare_cache[cache_id] = { res, context }
+  table.insert(_G.durations, 0.000001 * (vim.loop.hrtime() - start_time))
   return res, context
 end
 
+H.prepare_cache = {}
+_G.durations = {}
+
 --- Default match
 ---
 --- Match snippets based on the line before cursor.

With the benchmarking code present, open a separate nvim instance and perform the expand (<C-j>) on a blank line in a Lua file 11 times (or more). Then :=durations will show how long each default_prepare() call took. For a summary I use the median from MiniMisc.stat_summary(): :=stat_summary(vim.list_slice(durations, 2)) (remove the first entry, as it reads data from disk and is thus noticeably bigger).
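For readers without MiniMisc available, the median it reports can be reproduced in plain Lua (a simple stand-in, not MiniMisc's implementation), applied to the durations array after dropping the first disk-bound entry:

```lua
-- Median of a numeric array (copies input so the original stays unsorted)
local function median(arr)
  local sorted = {}
  for i, x in ipairs(arr) do sorted[i] = x end
  table.sort(sorted)
  local n = #sorted
  if n % 2 == 1 then return sorted[(n + 1) / 2] end
  return 0.5 * (sorted[n / 2] + sorted[n / 2 + 1])
end

-- Summarize durations, skipping the first entry (it includes disk reads)
local function summarize(durations)
  local rest = {}
  for i = 2, #durations do rest[#rest + 1] = durations[i] end
  return median(rest)
end
```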

Adding benchmarking code and then interactively performing the task to be benchmarked is usually a more accurate way of measuring things.

@abeldekat (Owner, Author)

Use caching in the same way as implemented in this blink PR.
Also, completion_item.documentation is reused.
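The documentation reuse mentioned above can be sketched as a resolve-once pattern (a hypothetical illustration, not the actual PR code; `build_docs` stands in for the real, relatively expensive rendering of a snippet's body):

```lua
-- Resolve documentation lazily and store it on the item itself, so
-- repeated completions for the same item never rebuild it
local function resolve_documentation(item, build_docs)
  if item.documentation == nil then
    item.documentation = build_docs(item) -- expensive rendering step
  end
  return item.documentation
end
```

This complements the prepare cache in the patch: the prefix list is cached per buffer/lang, while documentation is cached per item.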
