
Spinlock Usage #12

Open
ibraheemdev opened this issue Jan 14, 2025 · 4 comments

Comments

@ibraheemdev

ibraheemdev commented Jan 14, 2025

Unless I'm missing something, this library seems to rely heavily on a pure spinlock. This implementation is problematic for a number of reasons:

  • Spinlocks tend to perform very well in benchmarks but are not a great fit for most real-world code, where efficiency is an important consideration alongside pure throughput.
  • This is especially problematic in oversubscribed scenarios: the spinlock does not even attempt to yield to other threads and instead reschedules itself immediately, which can starve the thread holding the lock. Note that yielding would not be much better; unbounded spinning is almost always a problem, and you should fall back to parking.
  • The spinlock spins on try_read, which is typically an RMW operation. This is very inefficient and introduces unnecessary contention even when the lock state has not changed, and it does so without attempting to yield or perform backoff.
  • This can also lead to priority inversion in custom scheduler setups.
  • Crucially, the map is specifically designed to allow locks to be held across .await points, which is exactly when a spinlock will fall apart.
  • On a single-threaded runtime, this will either lead to useless spinning until the cooperative budget is exhausted (on tokio for example), or a plain old deadlock.
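To make the RMW and backoff points concrete, here is a hypothetical sketch (not this library's code, and only for the synchronous case) of the usual mitigations: spin on a plain load rather than a compare-exchange ("test-and-test-and-set"), bound the spinning, and yield to the scheduler once the bound is exceeded. A production lock would park the thread instead of yielding, and an async lock would queue the waiting task.

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Illustrative spin-then-yield lock; names are invented for this sketch.
struct SpinThenYield {
    locked: AtomicBool,
}

impl SpinThenYield {
    fn new() -> Self {
        Self { locked: AtomicBool::new(false) }
    }

    fn lock(&self) {
        let mut spins = 0u32;
        loop {
            // Cheap read-only check first: avoids hammering the cache line
            // with failed compare_exchange (RMW) attempts under contention.
            if !self.locked.load(Ordering::Relaxed)
                && self
                    .locked
                    .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
            {
                return;
            }
            spins += 1;
            if spins < 64 {
                std::hint::spin_loop();
            } else {
                spins = 0;
                // Bounded spinning: hand the CPU back to the scheduler
                // instead of starving the thread holding the lock.
                thread::yield_now();
            }
        }
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

pub fn run_demo() -> u64 {
    let lock = Arc::new(SpinThenYield::new());
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let (lock, counter) = (Arc::clone(&lock), Arc::clone(&counter));
            thread::spawn(move || {
                for _ in 0..1000 {
                    lock.lock();
                    counter.fetch_add(1, Ordering::Relaxed);
                    lock.unlock();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", run_demo()); // 4000
}
```

Even with these mitigations, this only addresses the contention bullet; it does nothing for the across-`.await` problem, which requires an async-aware lock.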

Given that the benchmarks do not mention the use of spinlocks at all, I find the results very misleading. The spinlocks should be switched out for a real asynchronous lock, such as tokio::sync::RwLock.

@willothy
Member

This was discussed quite a bit on our Hacker News post. The spinlock usage came from my own misunderstanding of whether this counts as a spinlock, given that tasks are re-queued on the tokio global queue. A fair queueing system for reads and writes is in progress, but I haven't had the time lately to finish it up. Maybe I should replace the spinlocks with tokio::sync::RwLock for the time being; you're right.

I'm sorry you find the benchmarks misleading, hopefully you can understand that this was due to a misunderstanding and not the intent to mislead :)

@willothy
Member

Btw I've used papaya for a few personal projects since you mentioned it on Reddit and it's quite nice!

@ibraheemdev
Author

A fair queueing system for reads and writes is in-progress but I haven't had the time lately to finish it up.

Glad to hear this! I'm a bit apprehensive towards projects claiming to be super performant while relying heavily on spinlocks, because the authors tend to remain convinced that spinlocks are not a problem. If this was just a misunderstanding and there is work underway to fix it, there's no harm done :)

Maybe I should replace the spinlocks with tokio::sync::RwLock for the time being, you're right.

This is probably a good idea yeah. As I mentioned the spinlock here is particularly problematic, and I do think that there is a use case for a better tokio::RwLock<HashMap>, just like dashmap is a better RwLock<HashMap>. I am interested in how you are eventually planning to do better than just an async lock though.

@willothy
Member

I guess what I'd end up with is probably just a reimplementation of an async RwLock. I'll hack around and see what I come up with, but for now I've swapped the std RwLock for the one from tokio.
