Improve call counting mechanism #1457
- Call counting through the prestub is fairly expensive, and its overhead shows up immediately after call counting begins
- Added call counting stubs. When starting call counting for a method:
- A `CallCountingInfo` is created and initializes a remaining call count with a threshold
- A `CallCountingStub` is created. It contains a small amount of code that decrements the remaining call count and checks for zero. When nonzero, it jumps to the code version's native code entry point. When zero, it forwards to a helper function that handles tier promotion (a rough sketch of this logic follows the list below).
- When the call count threshold is reached, the helper call enqueues completion of call counting for background processing
- When completing call counting, the code version is enqueued for promotion, and the call counting stub is removed from the call chain
- Once all work queued for promotion is completed and the methods have transitioned to the optimized tier, call counting stubs are deleted based on some heuristics and under runtime suspension
- The `CallCountingManager` is the main class with most of the logic. Its private subclasses are just simple data structures.
- Call counting is done at a `NativeCodeVersion` level (stub association is with the code version)
- The code versioning lock protects the data structures used for call counting. Since installing a call counting stub requires knowing the currently active code version, it made sense to use the same lock.
- Call counting stubs contain hardcoded code. x64 has short and long stubs; short stubs are used when possible (often) and use IP-relative branches to the method's code and to the helper stub. Other platforms have only one type of stub (a short stub).
- For tiered methods that don't have a precode (virtual and interface methods), a forwarder stub (a precode) is created that forwards to the call counting stub. This is so that the call counting stub can be safely and easily deleted. The forwarder stubs are only used while counting calls; there is one per method (not per code version), and they are not deleted. See `CallCountingManager::SetCodeEntryPoint()` for more info.
- The `OnCallCountThresholdReachedStub()` helper takes a "stub-identifying token". From the token, the helper gets the stub's address and whether it is a short or long stub. From the stub, the remaining-call-count pointer is used to look up the `CallCountingInfo`, and from that the `NativeCodeVersion` associated with the stub.
- The `CallCountingStubManager` traces through a call counting stub so that VS-like debuggers can step into a method through the call counting stub
- Exceptions (OOM)
- On foreground threads, exceptions are propagated unless they can be handled without any compromise
- On background threads, exceptions are caught and logged as before. The scope of an exception is limited to one method or code version where possible, so that a loop over many methods is not entirely aborted by a single exception.
- Fixed a latent race where a method is recorded for call counting and then the method's code entry point is set to tier 0 code
- With that order, the tiering delay may expire and the method's entry point may be updated for call counting in the background before the recording thread sets the code entry point; that last action would disable call counting for the method and cause it not to be optimized. The only protection against this was the delay itself, and a shorter configured delay increases the chance of it happening.
- Inverted the order such that the method's code entry point is set before recording it for call counting, both on first and subsequent calls
- Changed the tiered compilation lock to be an any-GC-mode lock so that it can be taken inside the code versioning lock. Some things were more naturally placed inside the code versioning lock, where the active code version is known, such as checking for the tiering delay in order to delay call counting, and promoting the code version when the call count threshold is reached.
- Unfortunately, that makes code inside the lock a GC-no-trigger scope, and things like scheduling a timer or queuing a work item to the thread pool cannot be done inside that scope. This tradeoff seems better than the alternatives, so those pieces were refactored to occur outside the scope.
- Publishing an entry point after changing the active code version now takes call counting into account, fixes https://github.com/dotnet/coreclr/issues/22426
- After the changes:
- Call counting overhead is much smaller and is no longer many orders of magnitude greater than the cost of a method call
- Some config modes for tuning tiering (increasing the call count threshold, disabling or decreasing the tiering delay) are now much more reasonable and do not hurt perf nearly as much as before. This also enables dynamic thresholds in the future, which is currently not feasible due to the overhead.
- No change to startup or steady-state perf
- Left for later
- Eventing work to report call counting stub code ranges and method name (also needs to be done for other stubs)
- Some tests that consume events to verify run-time behavior in a few config modes
- Debugger test to verify debugging while call-counting. Debugger tests also need to be fixed for tiering.
- The call count threshold has not been changed for now. Since we don't have many tests that measure performance between startup and steady state, some will need to be created, perhaps from existing tests, to determine the effects
- Fixes https://github.com/dotnet/coreclr/issues/23596
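The stubs themselves are hand-written assembly emitted per platform; purely as an illustration of the behavior described in the list above, here is a minimal C++ sketch. The type and member names are placeholders, not coreclr's actual `CallCountingStub` layout.

```cpp
// Minimal sketch (not the actual stub, which is hand-written assembly per platform).
// Models the behavior described above: decrement a remaining-call counter and either
// jump to the code version's native entry point or call the promotion helper.
#include <cstdint>

struct CallCountingStubSketch
{
    int32_t* remainingCallCountCell; // points into the corresponding CallCountingInfo
    void (*targetForMethod)();       // the code version's native code entry point
    void (*onThresholdReached)(void* stubIdentifyingToken); // promotion helper

    void Invoke()
    {
        // The real stub decrements without locking; races can only cause a few
        // extra or missed counts, which is acceptable for a heuristic.
        if (--(*remainingCallCountCell) != 0)
        {
            targetForMethod(); // common, cheap path: jump to the method's code
        }
        else
        {
            // Rare path: the helper enqueues completion of call counting for
            // background processing (promotion to the optimized tier).
            onThresholdReached(this);
        }
    }
};
```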
@@ -330,11 +330,7 @@ to update the active child at either of those levels (ReJIT uses SetActiveILCode

In order to do step 3 the `CodeVersionManager` relies on one of three different mechanisms, a `FixupPrecode`, a `JumpStamp`, or backpatching entry point slots. In [method.hpp](https://github.com/dotnet/coreclr/blob/master/src/vm/method.hpp) these mechanisms are described in the `MethodDesc::IsVersionableWith*()` functions, and all methods have been classified to use at most one of the techniques, based on the `MethodDesc::IsVersionableWith*()` functions.

### Thread-safety ###

Removed:

CodeVersionManager is designed for use in a free-threaded environment, in many cases by requiring the caller to acquire a lock before calling. This lock can be acquired by constructing an instance of the

```
CodeVersionManager::TableLockHolder(CodeVersionManager*)
```

Added:

CodeVersionManager is designed for use in a free-threaded environment, in many cases by requiring the caller to acquire a lock before calling. This lock can be acquired by constructing an instance of `CodeVersionManager::LockHolder`.

Unchanged context:

in some scope for the CodeVersionManager being operated on. CodeVersionManagers from different domains should not have their locks taken by the same thread with one exception, it is OK to take the shared domain manager lock and one AppDomain manager lock in that order. The lock is required to change the shape of the tree or traverse it but not to read/write configuration properties from each node. A few special cases:
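For illustration, here is a minimal self-contained C++ sketch of the scope-based (RAII) pattern that such a lock holder follows. The `std::mutex` and the names below are stand-ins, not coreclr types; only `CodeVersionManager::LockHolder` comes from the doc text above.

```cpp
#include <mutex>

// Stand-in for the runtime's code versioning lock; not a coreclr type.
std::mutex g_codeVersioningLock;

// Sketch of the RAII holder pattern that CodeVersionManager::LockHolder follows.
class LockHolderSketch
{
public:
    LockHolderSketch()  { g_codeVersioningLock.lock(); }   // acquire on construction
    ~LockHolderSketch() { g_codeVersioningLock.unlock(); } // release when the scope exits
};

void UpdateCodeVersionTreeExample()
{
    LockHolderSketch lockHolder; // analogous to constructing CodeVersionManager::LockHolder

    // While the lock is held it is safe to change the shape of the code version tree
    // or traverse it, per the thread-safety section above.
}
```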
The comments did a great job sketching out the design you ended up with, but I think the rationale for why you arrived at this design as opposed to something different could be equally illuminating. Typically, before making a change of this scale, there would be some discussion about options or a write-up in a doc, so I'm not sure if I just missed that? At this point I am not expecting that we'd make large deviations from the approach in this PR unless we found a serious issue, given how much effort I assume you have invested in it and the perf gains. However, it would still be useful to understand, among the other design options, what has already been eliminated via thought experiment or performance testing and what is still interesting to experiment with in the future.
Some design alternatives that come to mind:
Sure thing. There hasn't been a doc, most of the discussions happened in-person.
The most expensive parts of reaching the call count threshold are:
#1 and #2 are now done in the background; otherwise they were showing up in the spike when methods reach the call count threshold. Doing or avoiding #2 is a tradeoff between some background work and some foreground work; I wasn't particularly trying to change the way it currently works and favored avoiding the extra foreground work.
Will talk about this separately
Separate stubs allow counting only tiered methods, and only at specific times. Changing an existing stub like a precode would increase its size unnecessarily.
At the moment it doesn't make much difference. If there were to be a tier 0.5 in the future then it would probably want to be counted separately from tier 0, and due to the unlocked counting it's not easy to deterministically reset the remaining call count. The stub could be per-MethodDesc instead, but that would entail making it larger and slower so that it's patchable.
I did the fine-grained synchronous approach first and, after seeing the large spikes, changed it to coarse-grained asynchronous. It's not very fine-grained in practice though; methods typically reach the call count threshold in bursts.
Forgot to add: I'm not familiar with Andy's JIT prototypes for OSR, but it may make sense to count calls in the jitted code for that. There is direct compensation in that jitting methods containing loops at tier 0 would decrease startup time, and since such a method contains a loop it will likely be called at least a few times anyway and may be more worth optimizing.
Thanks @kouvel! My goal was to capture your existing thinking on the topic while it is fresh and that has now been done : )