
Auto-completion is often very slow #6160

Open
fgouget opened this issue Jan 8, 2025 · 6 comments
fgouget commented Jan 8, 2025

What

  • The auto-completion process for drop-down lists often takes over 30 seconds, which seriously hampers use of the application.
  • The issue is not systematic. Furthermore, there is no auto-completion when offline. Together these two points suggest that the auto-completion process queries data from the server and is thus slow whenever the server is slow to reply. Making the server faster would alleviate the issue, but the completion data should really be cached in the application.

Steps to reproduce the behavior

  1. Edit a product and go to packaging components (Composants d'emballage)
  2. Go to a field like shape (Forme), material (Matière) or recycling instruction (Consigne de tri).
  3. Start typing a value like "bott", "plas" or "recy" respectively.
  4. The spinner appears in the edit field and keeps spinning for tens of seconds. During that time the suggestions list remains empty or does not take into account the latest content of the edit box.

This also impacts other parts of the application like the label list, categories, etc.

Expected behavior

To be useful the auto-completion should take under a second.

Why

  • Delaying the auto-completion prevents the user from completing the input unless they know the exact wording and capitalization of the entry they want.
  • Without the auto-completion the user will just provide random variations on the expected values like "Transparent Polyethylene", "Transparent - PET", "transparent - pet", "PET : Transparent" instead of the expected "PET - transparent" (guessed from the French translation).
  • Auto-completion delays have a compounding effect: a 10-second delay translates into a 90-second delay when filling the packaging components for a typical product with just three packaging elements (sleeve, container, lid) × 3 fields (shape, material, recycling) × 10 seconds. This seriously hampers the usability of the application.

Smartphone model

  • Device: Galaxy S7
  • OS: Android 8
  • App Version: 4.17.1
  • Language: French (the auto-completion provides localized suggestions and may thus also be looking up translations online)
  • Internet connectivity: WiFi then 1 Gbps fiber
@monsieurtanuki
Contributor

A solution would be to cache all values, or previously suggested values, if possible.
Or values previously selected by the user.

@teolemon
Member

teolemon commented Jan 9, 2025

Eventually, we'll switch to Search-A-Licious for increased perf, but the backend is not quite ready yet.

@teolemon
Member

@monsieurtanuki what's the current strategy in terms of keystrokes?

@monsieurtanuki
Contributor

> @monsieurtanuki what's the current strategy in terms of keystrokes?

I think:

  • we have a minimum of 3 characters before we start looking for something
  • we don't prevent the user from typing while we're searching

Does that answer your question?
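For illustration, the behavior described above could be sketched as a simple local gate (Python for illustration only, not the app's actual code; the function and constant names are hypothetical):

```python
# Only trigger a suggestion request once the input is long enough.
# MIN_QUERY_LENGTH mirrors the 3-character minimum mentioned above.
MIN_QUERY_LENGTH = 3

def should_trigger_suggestions(query: str) -> bool:
    """Return True when the typed text warrants a server lookup."""
    return len(query.strip()) >= MIN_QUERY_LENGTH
```

Typing stays non-blocking because this check is purely local; a network request only starts once the gate passes.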

@alexgarel
Member

@monsieurtanuki, we are trying to improve server performance these days. We have created a "priority" server to isolate search and facet requests (which are slow), so that they don't make every other request slow.

Today I made suggestion requests go to the priority server.

That said, I think it would be better if, apart from waiting for 3 characters:

  • the app would also wait 1 s (or 500 ms) without any character added before triggering the suggestion request
  • if the user adds more input, the app would not show suggestions returned by a previously emitted suggestion request

I don't know the current status around that. Is this the way it already behaves?
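The two rules above (a quiet-period debounce, plus discarding responses to outdated queries) could be combined roughly like this; a Python sketch with hypothetical names, not the app's actual code:

```python
import threading

class SuggestionFetcher:
    """Debounce keystrokes and drop responses to outdated queries.

    fetch_fn(query) is assumed to return a list of suggestions (the
    real app would call the suggestion endpoint instead);
    on_results(results) is assumed to update the UI.
    """

    def __init__(self, fetch_fn, on_results, delay_s=0.5):
        self._fetch_fn = fetch_fn
        self._on_results = on_results
        self._delay_s = delay_s
        self._timer = None
        self._latest_id = 0  # id of the most recent keystroke
        self._lock = threading.Lock()

    def on_keystroke(self, query: str) -> None:
        with self._lock:
            self._latest_id += 1
            request_id = self._latest_id
            if self._timer is not None:
                self._timer.cancel()  # restart the quiet period
            self._timer = threading.Timer(
                self._delay_s, self._run, args=(request_id, query))
            self._timer.start()

    def _run(self, request_id: int, query: str) -> None:
        results = self._fetch_fn(query)
        with self._lock:
            # Discard the answer if the user typed again meanwhile.
            if request_id == self._latest_id:
                self._on_results(results)
```

Each keystroke cancels the pending timer, so the server is only queried after the quiet period, and any response that arrives for an older query is silently dropped.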

Adding some cache is also a good idea, so that if I typed: choco, then chocox, then correct to have choco again, my previous suggestions are shown instantly. But this is more of a stretch goal!
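A minimal session-level cache along those lines might look like this (the names and the eviction policy are assumptions, not the app's design):

```python
class SuggestionCache:
    """In-memory cache mapping an exact query string to its suggestions."""

    def __init__(self, max_entries: int = 256):
        self._entries: dict[str, list[str]] = {}
        self._max_entries = max_entries

    def get(self, query: str):
        """Return cached suggestions, or None on a miss."""
        return self._entries.get(query)

    def put(self, query: str, results) -> None:
        if len(self._entries) >= self._max_entries:
            # Evict the oldest entry (dicts preserve insertion order).
            self._entries.pop(next(iter(self._entries)))
        self._entries[query] = list(results)
```

With such a cache, typing choco → chocox → choco serves the second choco lookup instantly, without a round trip to the server.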

As @teolemon already said, we also hope to improve suggestion request latency soon, using search-a-licious (but soon is not soon enough :-P, unless we get some new help).

@monsieurtanuki
Contributor

Hi @alexgarel!

> the app would also wait 1 s (or 500 ms) without any character added before triggering the suggestion request

Not implemented, but makes sense.

> if the user adds more input, the app would not show suggestions returned by a previously emitted suggestion request

I believe it's somehow already implemented, to be confirmed.

> Adding some cache is also a good idea, so that if I typed: choco, then chocox, then correct to have choco again, my previous suggestions are shown instantly. But this is more of a stretch goal!

I guess we could cache the results, at least at the app session level (e.g. in static variables).
We could even show immediate results with cached values but keep the 1 s delay for the search on the server.
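That combination (instant cached results plus a still-debounced server refresh) could be wired up roughly like this; the callback names are hypothetical:

```python
def on_input_changed(query, cache, show_suggestions, schedule_server_fetch):
    """cache maps query -> suggestions; show_suggestions updates the UI;
    schedule_server_fetch arms the 1 s (or 500 ms) debounce timer."""
    cached = cache.get(query)
    if cached is not None:
        show_suggestions(cached)  # instant feedback from the session cache
    # Still query the server after the quiet period: its answer both
    # refreshes the UI and replaces the cached entry.
    schedule_server_fetch(query)
```

The user sees cached suggestions immediately, and the server's (possibly newer) answer arrives later to update both the list and the cache.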

Projects
Status: 💬 To discuss and validate