Run fixpoint per strongly connected component #999
Conversation
CodSpeed Performance Report: Merging #999 will degrade performance by 14.95%.
Hmm, I'm surprised that this is even slower than #995, considering that most logic now lives directly in … Either way, I think there are some optimization opportunities:
// If the value is from the same revision but is still provisional, consider it changed
// because we're now in a new iteration.
I always found the `deep_verify_memo` logs for provisional values confusing. Exiting early here ensures they don't show up.
The amortized benchmark results make no sense to me... I didn't touch …
I found one more bug. We hit … I'm not entirely sure yet what's happening.
@MichaReiser Should I review this now, or is it better to wait for further exploration of this new bug?
This PR rewrites how Salsa runs fixpoint iterations when multiple cycle heads participate in a cycle (when there are multiple nested cycles).
Motivation
The main motivation for this is that we see runaway iteration counts in ty for code like this. Or, as a graph:
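(A rough sketch reconstructed from the query names used in the rest of this description, not the actual ty example; which of `z1`/`z2` carries the edge into `y1` is an assumption.)

```rust
// Sketch of the dependency structure: `a -> b` means "a depends on b".
//
//   x1   x2                  (no dependencies)
//
//   y1 <-> y2
//   y1 <-> y3                (y1, y2, y3 all reach each other)
//
//   z1 <-> z2
//   z1  -> y1                (one direction only; drawn from z1 as an assumption)
fn main() {
    let edges: &[(&str, &str)] = &[
        ("y1", "y2"), ("y2", "y1"),
        ("y1", "y3"), ("y3", "y1"),
        ("z1", "z2"), ("z2", "z1"),
        ("z1", "y1"), // assumed: the z component depends on y1, not the other way around
    ];
    for (from, to) in edges {
        println!("{from} -> {to}");
    }
}
```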
Today, we'll run separate fixpoint iterations for `y1 <-> y2`, `y1 <-> y3`, and `z1 <-> z2`.

The downside of this is that Salsa runs the nested fixpoint iteration `y1 <-> y3` to convergence for every `y1 <-> y2` iteration. If there are more deeply nested cycles, then each of those inner cycles is run to completion for every `y1 <-> y2` and `y1 <-> y3` iteration, making the number of iterations dependent on the nesting of queries rather than how quickly the problem converges.

To avoid this, the idea is to identify the strongly connected components in the dependency graph. These are:
- `x1` (no dependencies)
- `x2` (no dependencies)
- `y1`, `y2`, `y3`
- `z1`, `z2` (only depends on `y1`, in one direction but not both)

Then, run a fixpoint per strongly connected component. That means we only run two fixpoint iterations for the example above (one for `y1, y2, y3` and one for `z1, z2`). This ensures that the runtime is bounded by how quickly the problem converges and not by how deeply nested our queries are (which creates a tension between granular queries and performance).
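This grouping is a standard strongly-connected-components computation. As a minimal sketch, here is Tarjan's algorithm over the reconstructed edge list from above; it only illustrates how the queries group into components, not how Salsa identifies cycles at runtime.

```rust
use std::collections::{HashMap, HashSet};

/// Tarjan's strongly-connected-components algorithm over a tiny adjacency list.
struct Tarjan<'a> {
    adj: &'a HashMap<&'a str, Vec<&'a str>>,
    index: HashMap<&'a str, usize>,
    lowlink: HashMap<&'a str, usize>,
    on_stack: HashSet<&'a str>,
    stack: Vec<&'a str>,
    next_index: usize,
    components: Vec<Vec<&'a str>>,
}

impl<'a> Tarjan<'a> {
    fn strongconnect(&mut self, v: &'a str) {
        self.index.insert(v, self.next_index);
        self.lowlink.insert(v, self.next_index);
        self.next_index += 1;
        self.stack.push(v);
        self.on_stack.insert(v);

        let adj = self.adj; // copy the reference so we can mutate `self` below
        for &w in adj.get(v).map(|n| n.as_slice()).unwrap_or(&[]) {
            if !self.index.contains_key(w) {
                self.strongconnect(w);
                let low = self.lowlink[v].min(self.lowlink[w]);
                self.lowlink.insert(v, low);
            } else if self.on_stack.contains(w) {
                let low = self.lowlink[v].min(self.index[w]);
                self.lowlink.insert(v, low);
            }
        }

        // `v` is the root of a component: pop everything above it off the stack.
        if self.lowlink[v] == self.index[v] {
            let mut component = Vec::new();
            loop {
                let w = self.stack.pop().unwrap();
                self.on_stack.remove(w);
                component.push(w);
                if w == v {
                    break;
                }
            }
            self.components.push(component);
        }
    }
}

fn main() {
    // Reconstructed edges from the example; `(from, to)` means `from` depends on `to`.
    let edges = [
        ("y1", "y2"), ("y2", "y1"),
        ("y1", "y3"), ("y3", "y1"),
        ("z1", "z2"), ("z2", "z1"),
        ("z1", "y1"), // assumed one-directional edge into y1
    ];
    let nodes = ["x1", "x2", "y1", "y2", "y3", "z1", "z2"];

    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for (from, to) in edges {
        adj.entry(from).or_default().push(to);
    }

    let mut tarjan = Tarjan {
        adj: &adj,
        index: HashMap::new(),
        lowlink: HashMap::new(),
        on_stack: HashSet::new(),
        stack: Vec::new(),
        next_index: 0,
        components: Vec::new(),
    };
    for node in nodes {
        if !tarjan.index.contains_key(node) {
            tarjan.strongconnect(node);
        }
    }

    // Prints the components, e.g. ["y2", "y3", "y1"], ["z2", "z1"], ["x1"], ["x2"]:
    // only the two multi-member components need a fixpoint iteration.
    for component in &tarjan.components {
        println!("{component:?}");
    }
}
```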
Single-threaded implementation
Let's ignore multi-threading first as this only complicates things ;)
The change in itself isn't that complicated:

- `execute_maybe_iterate`: If the query is part of an outer cycle, then don't remove the cycle head from `cycle_heads`, set a flag on the memo indicating whether this head has converged, and return the provisional value. The query will iterate as part of the outer cycle.
- The outermost cycle only completes once all cycle heads (`memo.cycle_heads`) have converged.

That's roughly it :)
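A sketch of that decision logic, with heavily simplified, hypothetical types (this is not Salsa's internal `Memo` or the real `execute_maybe_iterate` signature):

```rust
use std::collections::HashSet;

// Hypothetical, heavily simplified stand-ins for illustration only.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct QueryId(&'static str);

struct Memo {
    /// Cycle heads this provisional value still depends on.
    cycle_heads: HashSet<QueryId>,
    /// Whether this head produced the same value twice in a row.
    converged: bool,
}

#[derive(Debug)]
enum Outcome {
    /// Return the provisional value; the query iterates as part of the outer cycle.
    ReturnProvisionalToOuterCycle,
    /// Outermost head and every head has converged: the value can be finalized.
    Finalize,
    /// Keep iterating this component.
    IterateAgain,
}

/// What happens after one execution of a cycle head, following the description above.
fn after_one_execution(
    this: &QueryId,
    memo: &mut Memo,
    value_unchanged: bool,
    all_heads_converged: bool,
) -> Outcome {
    // Set a flag on the memo indicating whether this head has converged.
    memo.converged = value_unchanged;

    // Part of an outer cycle: keep our entry in `cycle_heads` and return the
    // provisional value; the outer cycle drives further iterations.
    if memo.cycle_heads.iter().any(|head| head != this) {
        return Outcome::ReturnProvisionalToOuterCycle;
    }

    // Outermost head: only finalize once all participating heads have converged.
    if memo.converged && all_heads_converged {
        Outcome::Finalize
    } else {
        Outcome::IterateAgain
    }
}

fn main() {
    let this = QueryId("y3");
    let mut memo = Memo {
        // `y3` also depends on the outer head `y1`, so it is part of an outer cycle.
        cycle_heads: [QueryId("y1"), QueryId("y3")].into_iter().collect(),
        converged: false,
    };
    println!("{:?}", after_one_execution(&this, &mut memo, true, false));
}
```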
Multithreading
Where things get complicated is multithreading. I tried to keep the existing locking (where we wait on cycle heads in some places), but I wasn't able to make it work (there were spurious errors till the end). That's why I ended up rewriting cycle head synchronization from scratch.
What makes cycles tricky is that multiple threads can enter the same cycle simultaneously, each starting from a different cycle head that ultimately resolves to the same outer cycle. The new implementation makes this even trickier because inner cycles now complete immediately without iterating, which results in them releasing their locks. That allows other threads to claim those locks again, which can result in them competing for the same locks forever.
This PR extends Salsa's `DependencyGraph` with a mechanism to transfer the ownership of a query to another query. Let's say we have `a -> b -> a` and `b -> c -> b`. The inner cycle head can transfer its lock to the query `a`. `b` then remains locked until `c` completes (successfully or panicking). Only the thread owning `a` (either because it holds that lock or because it is a thread that `a` is blocked on) is allowed to reclaim `b`.

This ensures that the locks of all cycle heads participating in a cycle ultimately converge to be owned by a single query/thread. We need to do this for every query that supports cycle handling, even if it isn't a cycle head, because the query could become a cycle head in later iterations (e.g., due to changes in the cycle-entry point).
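To make the reclaim rule concrete, here is a toy model of the ownership transfer. All names and signatures below (`LockTable`, `transfer`, `may_reclaim`) are hypothetical and far simpler than Salsa's actual `DependencyGraph`.

```rust
use std::collections::HashMap;

// A toy model of the ownership-transfer rule. All names and signatures are
// hypothetical; this is not Salsa's `DependencyGraph`.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct Query(&'static str);

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Thread(u32);

#[derive(Default)]
struct LockTable {
    /// Which thread currently owns each claimed query's lock.
    owner: HashMap<Query, Thread>,
    /// Which query a query's lock has been transferred to, if any.
    transferred_to: HashMap<Query, Query>,
}

impl LockTable {
    fn claim(&mut self, query: Query, thread: Thread) {
        self.owner.insert(query, thread);
    }

    /// The inner cycle head `from` hands its lock over to the outer head `to`.
    fn transfer(&mut self, from: Query, to: Query) {
        self.transferred_to.insert(from, to);
    }

    /// A thread may reclaim `query` only if it owns the query that the lock was
    /// (transitively) transferred to. Assumes transfers form a chain, not a cycle.
    fn may_reclaim(&self, query: Query, thread: Thread) -> bool {
        let mut current = query;
        while let Some(&next) = self.transferred_to.get(&current) {
            current = next;
        }
        self.owner.get(&current) == Some(&thread)
    }
}

fn main() {
    // Cycles `a -> b -> a` and `b -> c -> b`, as in the example above.
    let (a, b) = (Query("a"), Query("b"));
    let (t1, t2) = (Thread(1), Thread(2));

    let mut locks = LockTable::default();
    locks.claim(a, t1);
    locks.claim(b, t1);

    // The inner head `b` transfers its lock to the outer head `a`.
    locks.transfer(b, a);

    // Only the thread that owns `a` may now reclaim `b`; other threads must wait.
    assert!(locks.may_reclaim(b, t1));
    assert!(!locks.may_reclaim(b, t2));
    println!("{b:?} can only be reclaimed by the owner of {a:?}");
}
```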
I also found this approach much easier to reason about than the old approach of "just wait for all cycle heads".
However, there's a downside. It requires acquiring the global `DependencyGraph` lock and tracking dependencies whenever claiming or releasing any query with cycle-handling support that participates in a cycle. We incur this cost even when all queries participating in the cycle run on the same thread; this is also the main reason why `converge-diverge`'s performance regresses. I'm not sure if there's a solution to avoid this.

TODO:
- `maybe_changed_after`
- `cycle_nested_deep_conditional_changed::the_test` sometimes hangs or panics