Reliability fixes for the Thread Pools #121887
Conversation
|
I think this is ready for review. |
Pull request overview
This PR addresses critical reliability issues in three thread pool implementations by reverting problematic changes from #100506 that led to deadlocks in .NET 9. The fix restores a simpler and safer enqueuer/worker handshake protocol that guarantees work items will always have a worker thread available to process them.
Key Changes:
- Simplified the `QueueProcessingStage` enum by removing the `Determining` state, leaving only `NotScheduled` and `Scheduled`
- Changed the worker thread protocol to reset the processing stage to `NotScheduled` before checking for work items (preventing a race condition window)
- Removed complex retry/dequeue logic and the `_nextWorkItemToProcess` optimization in favor of always requesting an additional worker when processing an item
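In rough form, the restored handshake looks something like the sketch below. This is a minimal illustration, not the actual runtime code; names such as `HandshakeSketch`, `_processingStage`, and `RequestWorker` are hypothetical stand-ins for the real members:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Minimal sketch of the restored enqueuer/worker handshake (hypothetical
// names, not the actual ThreadPoolWorkQueue code). The key point: the worker
// resets the flag to NotScheduled *before* checking the queue, so an enqueuer
// racing with a quitting worker always ends up requesting a new worker.
internal sealed class HandshakeSketch
{
    private const int NotScheduled = 0;
    private const int Scheduled = 1;

    private readonly ConcurrentQueue<Action> _queue = new();
    private int _processingStage; // stand-in for the real processing-stage field

    public void Enqueue(Action workItem)
    {
        _queue.Enqueue(workItem);
        // Only the NotScheduled -> Scheduled transition requests a worker.
        if (Interlocked.Exchange(ref _processingStage, Scheduled) == NotScheduled)
            RequestWorker();
    }

    private void Dispatch()
    {
        while (true)
        {
            // Reset the stage before checking for items; any item enqueued
            // after this point observes NotScheduled and requests its own worker.
            _processingStage = NotScheduled;
            Interlocked.MemoryBarrier(); // order the reset before the dequeue

            if (!_queue.TryDequeue(out Action? workItem))
                return; // queue is empty; a later Enqueue will wake a worker

            // An item was found: mark Scheduled again and always ask for one
            // more worker, so remaining items cannot be stranded. In the real
            // pool the actual thread injection is throttled elsewhere.
            Interlocked.Exchange(ref _processingStage, Scheduled);
            RequestWorker();

            workItem();
        }
    }

    private void RequestWorker() =>
        ThreadPool.UnsafeQueueUserWorkItem(_ => Dispatch(), null);
}
```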
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPoolWorkQueue.cs | Simplified general-purpose ThreadPool enqueuer/worker handshake by removing Determining state, _nextWorkItemToProcess field, and complex retry logic; streamlined Dispatch() to always request a worker after dequeuing an item; |
| src/libraries/System.Net.Sockets/src/System/Net/Sockets/SocketAsyncEngine.Unix.cs | Applied same handshake simplification to Unix socket async engine; removed UpdateEventQueueProcessingStage() method; simplified Execute() to use consistent pattern of resetting state before checking queue |
|
/benchmark plaintext,json,fortunes aspnet-citrine-lin runtime,libs |
|
Benchmark started for plaintext, json, fortunes on aspnet-citrine-lin with runtime, libs. Logs: link |
|
An error occurred, please check the logs |
|
Is this just a revert of the earlier change? Looks like that had some perf improvements, so we might notice some regressions due to the revert. |
Yes, it is a partial revert.
There were several changes in the original PR, and the result was some improvements and also a couple of regressions. This PR reverts only the part that is important for correctness. Not sure how much that part was contributing. |
|
There are ways to make the thread pool less eager about introducing workers, but that would need to happen in the part that actually controls the introduction of threads - around the LIFO semaphore and the logic that controls it. In the airport parking-lot shuttle analogy: the parking lot decides how many shuttles it has in rotation, but this flag is just calling them and telling them that a traveler has arrived and needs to be picked up, somehow, eventually... |
|
The changes look good to me; actually, the current logic to handle thread requests is quite complicated, so it was hard to be 100% sure that it worked correctly. However, since the current thread request handling was introduced to deal with some issues around using more CPU in certain scenarios, which we may want to address after this PR is merged, should we add a test similar to the one in #121608 so that it's more likely to detect when work items are not getting picked up? |
The repro involves two processes: a client and a server. They may run for quite a while before hanging - could be minutes. And the time-to-hang appears to depend on the CPU manufacturer/model or the number of cores. It is a good real-world-like sample for running locally as a stress test, but hardly useful as a CI test. I think we have some existing tests that look for TP items not being picked up, but since this issue requires rare races, it could be that tests cannot detect it on the kind of hardware the lab runs them on (or catch it only very rarely). Maybe we should think of some kind of "stress suite" for the thread pool. At least keep a collection of apps known to have had stress issues in the past. Like this example. |
|
I was curious about what the perf effect here could be and tried using the benchmark bot. It is not blocking, but it seems the right tool for this kind of query. |
|
I'll try one more time with the benchmark. Maybe it was some transient infra issue. |
|
/benchmark plaintext,json,fortunes aspnet-citrine-lin runtime,libs |
|
Benchmark started for plaintext, json, fortunes on aspnet-citrine-lin with runtime, libs. Logs: link |
// A new work item may be added right before the flag is reset
// without asking for a worker, while the last worker is quitting.
Scheduled
}
With just two not-scheduled / scheduled values now, do we even still need the enum? Could it just be a scheduled bool?
We can do a bool.
I use enum mostly to be able to comment on states. But a bool can have comments too.
I've replaced the enum with an int, as it was in the early net9 timeframe. A bool turned out somewhat inconvenient:
- We do Interlocked operations with this variable, and looking at the implementation, interlocked operations with `bool` may be less optimal than the `int` implementations, depending on the platform. Here we want to be fairly efficient.
- In one case the variable is padded into a cacheline-sized struct, and there is a pattern for padding `int32` fields that is used in many places, but not for `bool`.
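For illustration only, an int-based flag inside a cache-line-padded struct might look roughly like the sketch below. The type and field names are hypothetical, and the layout shown is just one way to express the idea, not the runtime's actual padding helpers:

```csharp
using System.Runtime.InteropServices;
using System.Threading;

// Hypothetical illustration: an int flag works directly with Interlocked and
// fits the usual int-sized padding pattern; a bool would need different
// handling for both.
[StructLayout(LayoutKind.Explicit, Size = 128)] // keep the flag on its own cache line
internal struct PaddedProcessingStage
{
    [FieldOffset(64)]
    public int Stage; // 0 = NotScheduled, 1 = Scheduled
}

internal static class StageOps
{
    // Returns true if this call transitioned the flag from NotScheduled to
    // Scheduled, i.e. the caller should request a worker.
    public static bool TrySchedule(ref PaddedProcessingStage s) =>
        Interlocked.Exchange(ref s.Stage, 1) == 0;
}
```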
_eventQueueProcessingStage = EventQueueProcessingStage.Determining;
Interlocked.MemoryBarrier();
// Checking for items must happen after resetting the processing state.
Interlocked.MemoryBarrier();
We need a full barrier here? And if yes, could this instead be:
`Interlocked.Exchange(ref _eventQueueProcessingStage, EventQueueProcessingStage.NotScheduled);`?
> We need a full barrier here?
We are ordering a read after write here. That needs a full barrier.
> And if yes, could this instead be:
I think I copied this from the original code in the ThreadPool, but I believe MemoryBarrier is used here intentionally.
The `_eventQueueProcessingStage` is a shared location, and an interlocked operation could get quite expensive if there is sharing.
I think in general, if you have an interlocked operation on a shared location anyway, you can rely on its ordering side effects (like we do in EnsureWorkerScheduled*), but you would not introduce such an operation just for ordering reasons.
An ordinary write to a shared location plus an explicit fence might be cheaper (in the worst case, the same).
--
*If EnsureWorkerScheduled is always ordered after the insertion of an item into the work queue, it might be able to first check with an ordinary read whether the flag is already set.
We can try more optimization in later net11-only changes.
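As a sketch of the two options being compared (with hypothetical names), the pattern used here keeps the store to the shared flag an ordinary write and pays only for the fence, while the alternative turns it into an interlocked read-modify-write:

```csharp
using System;
using System.Threading;

// Illustrative only; field and method names are hypothetical.
internal static class OrderingSketch
{
    private static int s_processingStage; // shared, possibly contended location

    // Pattern used here: plain write + explicit full fence, then the read.
    public static bool ResetThenCheck_WriteAndFence(Func<bool> hasItems)
    {
        s_processingStage = 0;        // NotScheduled (ordinary store)
        Interlocked.MemoryBarrier();  // order the store before the following read
        return hasItems();            // e.g. sweep the queues for items
    }

    // Alternative raised in review: the exchange is itself a full barrier,
    // but it is a read-modify-write on a shared location and can be more
    // expensive under contention.
    public static bool ResetThenCheck_InterlockedExchange(Func<bool> hasItems)
    {
        Interlocked.Exchange(ref s_processingStage, 0);
        return hasItems();
    }
}
```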
workItem = Interlocked.Exchange(ref workQueue._nextWorkItemToProcess, null);
}
// The state change must happen before sweeping queues for items.
Interlocked.MemoryBarrier();
Same Interlocked.Exchange question
This is how this was done in net9, before the change that introduced `Determining`:
runtime/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPoolWorkQueue.cs
Lines 590 to 591 in bc0380f
_separated.hasOutstandingThreadRequest = 0;
Interlocked.MemoryBarrier();
I think all three similar cases should unify on this pattern, except that there is no need to do it in an AggressiveInlining function if there is only one call to it.
When inserting work items into thread pool queues, we must always guarantee that for every work item there will be some worker at some point in the future that will certainly notice the presence of the work item and execute it.
There was an attempt to relax the requirements in #100506.
Sadly, it leads to occasional deadlocks when items are present in the work queues and no workers are coming to pick them up.
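To make the failure mode concrete, here is a hedged timeline sketch (using the same hypothetical names as the sketch in the review summary above) of how the wrong ordering can strand an item, and why resetting the flag before checking the queue closes the window:

```csharp
// Hypothetical interleaving; names are illustrative, not the actual fields.
//
// Hazardous ordering (check the queue first, reset the flag afterwards):
//   Worker:   TryDequeue() -> empty
//   Enqueuer: _queue.Enqueue(item)
//   Enqueuer: sees _processingStage == Scheduled -> does NOT request a worker
//   Worker:   _processingStage = NotScheduled; worker exits
//   Result:   the item sits in the queue and no worker is coming -> deadlock
//
// Restored ordering (reset the flag first, then check the queue):
//   Worker:   _processingStage = NotScheduled; Interlocked.MemoryBarrier()
//   Enqueuer: _queue.Enqueue(item)
//   Enqueuer: sees NotScheduled -> requests a worker
//   Worker:   TryDequeue() may or may not see the item, but either way some
//             worker is guaranteed to come back for it.
```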
The same change was made in all 3 threadpools - IO completion, Sockets, and the general-purpose ThreadPool. The fix is applied to all three threadpools.
We have seen reports about deadlocks when running on net9 or later releases:
The fix will need to be ported to net10 and net9. Thus this PR tries to restore just the part that changed the enqueuer/worker handshake algorithm.
More stuff was piled up into threadpool since the change, so doing the minimal fix without disturbing the rest is somewhat tricky.
Fixes: #121608 (definitely, I have tried with the repro)
Fixes: #119043 (likely, I do not have a repro to try, but symptoms seem like from the same issue)