perf: remove unnecessary loop #4215
Conversation
Summary
Removes the double-pass capacity pre-calculation in FlattenTrackedObjects to avoid a second enumeration and its locking overhead.

Critical Issues
Incorrect capacity calculation (TUnit.Core/Tracking/ObjectTracker.cs:57): the new code uses trackedObjects.Count as the HashSet capacity, which is the number of dictionary entries (depth levels), not the total number of objects. This will create an undersized HashSet that requires multiple resizes as objects are added, which makes the optimization actually worse than the original, since HashSet resizes are expensive operations that copy all elements.

Suggested fix: if the goal is to avoid the double pass, remove the capacity hint entirely. Modern HashSet implementations handle dynamic growth reasonably well, and this avoids both the double enumeration and the incorrect capacity.

Verdict
REQUEST CHANGES - Critical bug in capacity calculation
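A minimal illustration of the mismatch described in the review above; the dictionary shape (one HashSet of objects per depth level) is an assumption based on the discussion, not the actual TUnit code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for the tracker's state: one HashSet of tracked objects per depth level.
var trackedObjects = new ConcurrentDictionary<int, HashSet<object>>();
for (var depth = 0; depth < 3; depth++)
{
    trackedObjects[depth] = new HashSet<object>(Enumerable.Range(0, 100).Select(_ => new object()));
}

// Number of dictionary entries (depth levels) vs. total number of tracked objects.
Console.WriteLine(trackedObjects.Count);                         // 3
Console.WriteLine(trackedObjects.Values.Sum(set => set.Count));  // 300

// Using trackedObjects.Count as the capacity hint sizes the flattened set for 3 items,
// so it has to resize repeatedly while 300 objects are added.
var undersized = new HashSet<object>(trackedObjects.Count);
```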
The capacity hint is just so we have enough space for at least one item; I don't see how it could be harmful. I'll see if I can write a benchmark to see whether this PR is necessary, though it won't be able to replicate issues with locks.
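A rough sketch of what such a benchmark might look like with BenchmarkDotNet; the dictionary shape, sizes, and class/method names here are hypothetical stand-ins rather than TUnit code, and as noted it would not capture lock contention under real concurrency:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class FlattenBenchmark
{
    // Hypothetical stand-in for the tracker's state: one HashSet of objects per depth level.
    private readonly ConcurrentDictionary<int, HashSet<object>> _trackedObjects = new();

    [GlobalSetup]
    public void Setup()
    {
        for (var depth = 0; depth < 5; depth++)
        {
            var set = new HashSet<object>();
            for (var i = 0; i < 1_000; i++)
            {
                set.Add(new object());
            }
            _trackedObjects[depth] = set;
        }
    }

    [Benchmark(Baseline = true)]
    public HashSet<object> TwoPassWithCapacity()
    {
        // First enumeration: compute the total capacity.
        var totalCapacity = 0;
        foreach (var entry in _trackedObjects)
        {
            totalCapacity += entry.Value.Count;
        }

        // Second enumeration: copy into the pre-sized set.
        var flattened = new HashSet<object>(totalCapacity);
        foreach (var entry in _trackedObjects)
        {
            flattened.UnionWith(entry.Value);
        }
        return flattened;
    }

    [Benchmark]
    public HashSet<object> SinglePassNoCapacity()
    {
        // One enumeration, letting the HashSet grow on its own.
        var flattened = new HashSet<object>();
        foreach (var entry in _trackedObjects)
        {
            flattened.UnionWith(entry.Value);
        }
        return flattened;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<FlattenBenchmark>();
}
```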
Summary
This PR attempts to optimize FlattenTrackedObjects by removing the first pass that calculates the total capacity.

Critical Issues
Semantic bug: incorrect capacity calculation. The proposed change uses trackedObjects.Count as the HashSet capacity, but this is semantically incorrect: trackedObjects.Count is the number of dictionary entries (depth levels), not the sum of all HashSet sizes, whereas the original totalCapacity calculation summed the HashSet counts across all depth levels. For example, if trackedObjects has 3 depth levels with 100 objects each, trackedObjects.Count is 3 while the total number of objects is 300. This defeats the entire purpose of pre-sizing and likely makes performance worse than the original two-pass approach.

Suggestions
If you want to optimize this method, consider alternatives such as dropping the capacity hint entirely rather than reusing trackedObjects.Count. The comments in the current implementation at ObjectTracker.cs:54 suggest the pre-sizing was intentional, so if this optimization is questionable, the author should clarify whether the two-pass approach was added to solve a real performance issue.

Verdict
REQUEST CHANGES - The change introduces a semantic bug that would make performance worse than intended.
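For reference, a sketch of the two-pass shape both reviews describe. This is an assumed reconstruction, not the actual ObjectTracker.cs source; the lock around each per-depth set simply reflects the locking overhead mentioned in the discussion:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

internal static class ObjectTrackerSketch
{
    // Assumed shape of the original method: one pass to sum the counts, a second pass to copy.
    public static HashSet<object> FlattenTrackedObjects(
        ConcurrentDictionary<int, HashSet<object>> trackedObjects)
    {
        // First pass: enumerate every depth level just to compute the total capacity.
        var totalCapacity = 0;
        foreach (var entry in trackedObjects)
        {
            lock (entry.Value)
            {
                totalCapacity += entry.Value.Count;
            }
        }

        // Second pass: enumerate again (allocating another enumerator and re-taking the locks)
        // to copy everything into the pre-sized set.
        var flattened = new HashSet<object>(totalCapacity);
        foreach (var entry in trackedObjects)
        {
            lock (entry.Value)
            {
                flattened.UnionWith(entry.Value);
            }
        }

        return flattened;
    }
}
```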
Force-pushed from d741312 to 213fa09
Rebased and removed the capacity.
Summary
This PR removes the pre-calculation loop that sizes the HashSet capacity in FlattenTrackedObjects.

Critical Issues
None found ✅

Suggestions
Performance analysis: the PR author's intuition is correct. The change is a performance improvement because it drops the second enumeration of the ConcurrentDictionary, along with the extra enumerator allocation and lock work that it entails, while a HashSet without a capacity hint still grows dynamically at modest cost.

Verdict
✅ APPROVE - This is a good performance optimization that reduces allocations, lock contention, and unnecessary iteration. The change aligns with TUnit's "Performance First" principle from CLAUDE.md.
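And a sketch of how the method presumably looks after the change; again an assumed reconstruction (same hypothetical shape as above), not the merged code:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

internal static class ObjectTrackerSketch
{
    // Assumed shape after the change: a single enumeration, no capacity pre-calculation,
    // letting the HashSet resize as needed while objects are copied in.
    public static HashSet<object> FlattenTrackedObjects(
        ConcurrentDictionary<int, HashSet<object>> trackedObjects)
    {
        var flattened = new HashSet<object>();
        foreach (var entry in trackedObjects)
        {
            lock (entry.Value)
            {
                flattened.UnionWith(entry.Value);
            }
        }

        return flattened;
    }
}
```

With the capacity hint gone, the only enumeration left is the one that does the actual copying.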
Thanks for all these improvements btw!
Thanks, this kind of thing is super fun 😄
Removes an unnecessary loop to calculate HashSet capacity. I haven't benchmarked this, but I really doubt the savings from preallocating a HashSet outweigh the cost of allocating a second ConcurrentDictionary.Enumerator, locking, and looping the dictionary a second time. I'll close this PR if this was added to address a legitimate issue.