We noticed a lot of contention in FieldCacheImpl::Cache::Get (our queries use a lot of query-time joins + sorting, so we hit the field cache a lot).
We use a SearcherManager with warm-up queries to populate the field cache so we would expect it to be initialized in most cases before we hit it for actual requests.
The implementation appears to lock even on the happy path (when everything is already initialized). This seems to be a by-product of the choice of data structures (the underlying WeakDictionary, WeakHashMap, etc. are not thread-safe), so the locking is required in case the dictionary gets resized.
Ideally we could be using thread-safe data structures and only lock when initializing the data.
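The behavior being asked for can be sketched roughly as follows. This is a minimal illustration in Java (not Lucene's or Lucene.NET's actual code, and the class name `LockFreeReadCache` is hypothetical): a concurrent map lets already-initialized entries be read without any global lock, while `computeIfAbsent` serializes only the initialization of a missing entry.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical cache illustrating the desired behavior: reads of
// already-warmed entries take no global lock; only the computation
// of a missing entry is synchronized (per key, by computeIfAbsent).
class LockFreeReadCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    V get(K key, Function<K, V> loader) {
        // Fast path: plain concurrent read, no lock contention.
        V value = map.get(key);
        if (value != null) {
            return value;
        }
        // Slow path: at most one thread computes the value for this key.
        return map.computeIfAbsent(key, loader);
    }
}
```

With warm-up queries populating the cache up front, nearly all request-time calls would take the fast path.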
While having a thread-safe collection would be ideal, that is not how it was implemented in Lucene. However, I took a look at the implementation of WeakDictionary, and it differs from the java.util.WeakHashMap used in the OpenJDK: in particular, the CleanIfNeeded() method moves the entries from one dictionary instance to another every time it is called, which is less than ideal.
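To make the cost of that cleanup strategy concrete, here is a rough sketch in Java of the copy-based pattern described above (the class name `CopyOnCleanCache` is hypothetical, and this is not the actual WeakDictionary code): every cleanup walks all entries and rebuilds the backing map, so it is O(n) even when nothing is stale, whereas a WeakHashMap-style design expunges dead entries in place as they are discovered.

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a copy-based cleanup for a weak-valued cache.
class CopyOnCleanCache<K, V> {
    private Map<K, WeakReference<V>> backing = new HashMap<>();

    void put(K key, V value) {
        backing.put(key, new WeakReference<>(value));
    }

    V get(K key) {
        WeakReference<V> ref = backing.get(key);
        return ref == null ? null : ref.get();
    }

    // Rebuilds the entire backing map, keeping only entries whose
    // referent has not been garbage collected. This touches every
    // entry on every call, even when nothing is stale.
    void cleanIfNeeded() {
        Map<K, WeakReference<V>> fresh = new HashMap<>();
        for (Map.Entry<K, WeakReference<V>> e : backing.entrySet()) {
            if (e.getValue().get() != null) {
                fresh.put(e.getKey(), e.getValue());
            }
        }
        backing = fresh;
    }
}
```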
However, before a fix is attempted, it would be nice to have a reproduction case so we can see whether we are actually solving the issue or making it worse. Could you please provide one?
We have replaced WeakDictionary with ConditionalWeakTable in Lucene.NET 4.8.0-beta00007, but some of the APIs of ConditionalWeakTable that Lucene.NET requires are only available on .NET Standard 2.1.
If you are using a platform that supports .NET Standard 2.1, could you please check out whether this change resolves the issue you are experiencing?
JIRA link - [LUCENENET-610], created by sthmathew