Optimize SoftReference caches using ConcurrentHashMap's compute #3641

Open
wants to merge 4 commits into main

Conversation

@lolgab (Member) commented Oct 1, 2024

@bjaglin raised a great point about concurrent initialization in mill-scalafix: joan38/mill-scalafix#206 (comment)
In ZincWorkerImpl we were using synchronized to avoid that, but the built-in compute method in java.util.concurrent.ConcurrentHashMap is probably better.

Pull Request: #3641
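For context, here is a minimal sketch of the compute-based caching pattern the PR moves toward. Class and method names are illustrative, not the actual Mill sources: values are held behind SoftReferences, and ConcurrentHashMap.compute serializes per-key initialization so `create` never runs concurrently for the same key.

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.ref.SoftReference

// Illustrative sketch only, not the actual Mill code.
class SoftRefCache[A, B <: AnyRef] {
  private val cache = new ConcurrentHashMap[A, SoftReference[B]]()

  def getOrElseCreate(key: A)(create: => B): B =
    cache.compute(
      key,
      (_: A, existing: SoftReference[B]) =>
        // `existing` is null when the key is absent; recreate the value when
        // the previously cached one has already been garbage-collected.
        if (existing != null && existing.get.isDefined) existing
        else SoftReference(create)
    )() // dereference the SoftReference; see the review thread below for a caveat about this step
}
```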

@@ -708,4 +685,36 @@ object ZincWorkerImpl {
toAnalyze = List((List(start), deps(start).toList))
).reverse
}

private class AutoCloseableCache[A, B <: AnyRef] extends Cache[A, B] {
Member

Why is this called AutoCloseableCache? Did you forget a with AutoCloseable?

Also, I'd prefer to have Cache defined before its derivatives.

Member Author

What I want to express is a cache that, on clear, also tries to find AutoCloseable values and close them. I don't know what the best name for it would be. Do you have suggestions? I will change the order.

Member

Instead of a sub-class with an unlucky name, we could add a handleAutoClosables: Boolean parameter to clear.
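A rough sketch of that suggestion, as a method on a ConcurrentHashMap-backed cache like the one sketched in the description above. The signature is hypothetical, not code from the PR:

```scala
import scala.jdk.CollectionConverters._

// Hypothetical signature following the suggestion: one clear method that
// optionally closes AutoCloseable values, instead of a dedicated subclass.
def clear(handleAutoClosables: Boolean): Unit = {
  if (handleAutoClosables) {
    cache.values().iterator().asScala.foreach { ref =>
      ref.get match {
        case Some(c: AutoCloseable) => c.close()
        case _                      => () // nothing to close, or already collected
      }
    }
  }
  cache.clear()
}
```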

@lolgab lolgab requested a review from lefou October 1, 2024 12:39
@lolgab lolgab marked this pull request as ready for review October 1, 2024 13:17
@lihaoyi (Member) commented Oct 2, 2024

I'm personally unsure about the motivation here. Are we seeing performance bottlenecks that we think will be solved by this optimization? How will we know that this PR actually achieves what we hope it will?

There's always a risk of bugs when making changes to this kind of concurrent code, both nondeterministic semantic bugs and performance bugs, and the risk typically goes up as your concurrency model gets more fine-grained. If we're not seeing any real bottlenecks, I would prefer to have coarse-grained synchronized blocks and locks over fine-grained concurrent data structures; the more coarse-grained, the better.

@lolgab (Member Author) commented Oct 2, 2024

You are right, there is no particular performance reason behind this change. I should have named the PR Simplify... instead of Optimize...
Since I learned that we have a standard library function to do what we were manually doing (initialization of a cache disallowing concurrent creation), I thought it would simplify the code to use the std library function instead of reinventing it in Mill.
Some problems I noticed while making the PR:

  • ScalaJSWorkerImpl uses a mutable.Map which is not thread-safe. What happens if two threads try to link Scala.js code at the same time? Its initialization doesn't use a synchronized block.
  • javaOnlyCompilersCache uses another mutable.Map which is not thread-safe and doesn't wrap initialization with synchronized
  • not sure about this one, but classloaderCache uses a non-thread-safe implementation. Maybe it's not a problem, but it could be if you call clear and getOrElseCreate at the same time from two threads. Probably not possible at all.

A possibly problematic behavioral change of this PR that I hadn't noticed until now:
Before, it was using a LinkedHashMap; now it uses a ConcurrentHashMap. I don't know if maintaining insertion order is important when we close classloaders. Is it?

@lihaoyi (Member) commented Oct 2, 2024

I agree that there are probably tons of concurrency issues in the current implementation, but we still need to be clear about what we are trying to achieve here. Just dropping in Concurrent* data structures does nothing to make a piece of code thread-safe or race-free, and unlike single-threaded correctness, the bar of "seems to work and passes unit tests" tells us basically nothing about thread-safety.

To be clear, I think using ConcurrentHashMap sounds great, and could be a path to code that is both cleaner and more correct. But the PR description needs to be a lot more rigorous in identifying what the current problems are and arguing why the new approach fixes them.

case (_, v @ SoftReference(_)) => v
case _ => SoftReference(create)
}
)()
@bjaglin commented Oct 4, 2024

Isn't there a tiny risk that the instance returned by create (or the existing one) is garbage-collected by the time the SoftReference is dereferenced? Having a var to maintain a strong reference throughout the lifetime of getOrElseCreate feels safer.
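A rough sketch of the shape this suggests (illustrative, not code from the PR): a local var pins the value with a strong reference for the whole duration of getOrElseCreate, so the SoftReference cannot be cleared before the method returns.

```scala
def getOrElseCreate(key: A)(create: => B): B = {
  var strong: B = null.asInstanceOf[B] // strong reference kept for the whole call
  cache.compute(
    key,
    (_: A, existing: SoftReference[B]) => {
      // Reuse the existing value if it is still reachable, otherwise create it;
      // either way, pin it via `strong` so it cannot be collected before we return.
      val value = Option(existing).flatMap(_.get).getOrElse(create)
      strong = value
      SoftReference(value)
    }
  )
  strong
}
```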

Collaborator

Yeah, this doesn't look right. The scheme that I used in Soft (which is a SoftReference that can recreate itself if it's needed after it's cleared) is better: https://github.com/Ichoran/kse3/blob/5a900ad9578008f0f3454c759cb83e73ec0e21a8/flow/src/Cached.scala#L105-L154
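A greatly simplified sketch of that idea (not the kse3 implementation linked above): remember how to recreate the value, so a cleared reference can be transparently refilled on the next access. Recreation here may race across threads, which a real implementation would guard against more carefully.

```scala
import scala.ref.SoftReference

// Simplified illustration of a self-refilling soft reference.
final class Soft[A <: AnyRef](gen: () => A) {
  @volatile private var ref = SoftReference(gen())

  def apply(): A = ref.get match {
    case Some(a) => a
    case None =>
      val a = gen() // the cached value was collected: recreate and re-cache it
      ref = SoftReference(a)
      a
  }
}
```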

.values()
.iterator()
.asScala
.foreach { case SoftReference(v: AutoCloseable) =>
Member

foreach doesn't take a partial function, so we should provide a catch-all case. Not doing so will fail for non-AutoCloseable entries.
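A sketch of the requested fix (illustrative, assuming a ConcurrentHashMap of SoftReference values named `cache` as above): the added catch-all case makes the function total, so non-AutoCloseable or already-collected entries no longer throw a MatchError.

```scala
import scala.jdk.CollectionConverters._

cache
  .values()
  .iterator()
  .asScala
  .foreach {
    case SoftReference(v: AutoCloseable) => v.close()
    case _                               => () // nothing to close
  }
```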

@lihaoyi (Member) commented Oct 14, 2024

Opened a ticket to centralize discussion of the more general issue here: #3730
