Conversation

@thomhurst
Owner
Summary

  • Adds threshold-based optimization to use sequential processing for small test sets (< 8 items)
  • For small sets, task scheduling overhead exceeds parallelization benefits
  • Maintains parallel processing for larger test sets

Benchmarks

DataDrivenTests (18 tests):

| Framework | Before  | After   | Change          |
|-----------|---------|---------|-----------------|
| TUnit     | ~596ms  | ~540ms  | -56ms (-9.4%)   |
| MSTest    | ~546ms  | ~553ms  | —               |

TUnit was ~9% slower than MSTest, now it's ~2% faster.

ScaleTests (150 tests):

  • TUnit: ~558ms
  • MSTest: ~677ms
  • Optimization doesn't negatively impact larger test sets

Changes

  • Add ParallelThresholds class with MinItemsForParallel = 8 constant
  • Modify BuildTestsFromMetadataAsync to check count before parallel processing
  • Modify AotTestDataCollector.CollectTestsAsync to use sequential for small source sets
  • Use sequential processing for dynamic tests (typically rare)
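The dispatch described above might look roughly like this. This is a minimal sketch: `ParallelThresholds` and `MinItemsForParallel` come from the PR description, but `TestBuildDispatcher` and `buildOneAsync` are illustrative names, not the actual TUnit code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// From the PR description: the threshold below which sequential wins.
internal static class ParallelThresholds
{
    public const int MinItemsForParallel = 8;
}

// Illustrative helper (not TUnit's actual API) showing the dispatch.
internal static class TestBuildDispatcher
{
    public static async Task<List<TResult>> BuildAsync<TItem, TResult>(
        IReadOnlyList<TItem> items,
        Func<TItem, Task<TResult>> buildOneAsync)
    {
        if (items.Count < ParallelThresholds.MinItemsForParallel)
        {
            // Sequential path: no Task.Run scheduling or thread-pool churn.
            var results = new List<TResult>(items.Count);
            foreach (var item in items)
            {
                results.Add(await buildOneAsync(item));
            }
            return results;
        }

        // Parallel path: fan the work out onto the thread pool for larger sets.
        var tasks = items.Select(item => Task.Run(() => buildOneAsync(item)));
        return (await Task.WhenAll(tasks)).ToList();
    }
}
```

The key trade-off is that for a handful of items, the `Task.Run` scheduling and `Task.WhenAll` synchronization on the parallel path cost more than simply awaiting each item in turn.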

Test plan

  • All 290 TUnit.Engine.Tests pass
  • Benchmarked with DataDrivenTests (18 tests) - significant improvement
  • Benchmarked with ScaleTests (150 tests) - no regression

🤖 Generated with Claude Code

For small test sets (< 8 items), the overhead of parallel processing
(task scheduling, thread pool allocation, synchronization) exceeds
the parallelization benefits.

This change adds a threshold-based optimization that uses sequential
processing when the item count is below the threshold, while
maintaining parallel processing for larger test sets.

Benchmarks (DataDrivenTests with 18 tests):
- Before: TUnit ~596ms, MSTest ~546ms (TUnit ~9% slower)
- After: TUnit ~540ms, MSTest ~553ms (TUnit ~2% faster)
- Improvement: ~56ms (~9.4%)

Changes:
- Add ParallelThresholds class with MinItemsForParallel = 8
- Modify BuildTestsFromMetadataAsync to check count before processing
- Modify AotTestDataCollector to use sequential for small source sets
- Use sequential processing for dynamic tests (typically rare)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@thomhurst
Owner Author

Summary

Adds threshold-based optimization to use sequential processing for small test sets (< 8 items) to avoid task scheduling overhead.

Critical Issues

1. DUAL-MODE VIOLATION ❌

BLOCKING: This change violates TUnit's critical rule #1 - changes to core engine metadata collection MUST work in both source-gen AND reflection modes.

Problem: The optimization is only implemented in AotTestDataCollector (source-gen mode). The ReflectionTestDataCollector (reflection mode) at TUnit.Engine/Discovery/ReflectionTestDataCollector.cs does NOT have the same threshold-based optimization.

Required fix: Apply the same sequential processing logic to ReflectionTestDataCollector.CollectTestsAsync to ensure both modes behave identically.
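One way to keep the two modes from diverging again is to centralize the decision in a single shared predicate that both collectors consult. This is a sketch under the assumption that the collectors can share code; `CollectorParallelism` and `UseParallel` are hypothetical names, not existing TUnit API.

```csharp
// Hypothetical shared predicate: both AotTestDataCollector and
// ReflectionTestDataCollector would consult this before choosing the
// sequential or parallel path, so source-gen and reflection modes
// stay behaviourally identical by construction.
internal static class CollectorParallelism
{
    // Mirrors the PR's ParallelThresholds.MinItemsForParallel constant.
    private const int MinItemsForParallel = 8;

    public static bool UseParallel(int itemCount)
        => itemCount >= MinItemsForParallel;
}
```

With the threshold logic in one place, a future tuning change (say, raising the threshold to 16) cannot be applied to one mode and forgotten in the other.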


2. Performance: Unnecessary Allocation in Hot Path ⚠️

Location: AotTestDataCollector.cs:53 and TestBuilderPipeline.cs:153

The PR adds .ToList() calls to check the count before deciding whether to parallelize:

```csharp
var testSourcesList = testSourcesByType.SelectMany(kvp => kvp.Value).ToList();
var metadataList = testMetadata as IList<TestMetadata> ?? testMetadata.ToList();
```

Issue: This allocates a List in the hot path (test discovery) for EVERY test run, regardless of size. For large test suites (>8 items), this allocation overhead may negate the benefit of the optimization.

Suggestion: Consider these alternatives:

  1. Use TryGetNonEnumeratedCount() (.NET 6+) to avoid materialization when possible
  2. Enumerate at most N+1 items: if an (N+1)th item exists, the set exceeds the threshold, so take the parallel path without materializing the rest
  3. Profile to verify the List allocation cost < task scheduling cost for your threshold
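Alternative 1 might look like the following sketch, assuming .NET 6+ (`CountCheck` is an illustrative name). `TryGetNonEnumeratedCount` succeeds for arrays, `List<T>`, and other `ICollection<T>` sources without enumerating them, so no `List` has to be allocated just to make the threshold decision.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

internal static class CountCheck
{
    // Returns true when the source is known, without enumeration, to be
    // small enough that sequential processing is cheaper.
    public static bool ShouldRunSequentially<T>(
        IEnumerable<T> source, int threshold = 8)
    {
        if (source.TryGetNonEnumeratedCount(out var count))
        {
            return count < threshold;
        }

        // Count unknown without enumerating: default to the parallel
        // path rather than materializing the sequence here.
        return false;
    }
}
```

Note the deliberate default when the count is unknown: falling back to the parallel path avoids paying an enumeration or allocation cost in the common large-suite case, at the price of occasionally parallelizing a small lazy sequence.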

Previous Review Status

Could not fetch previous comments (API permissions), but based on the PR description this appears to be an initial submission.

Verdict

⚠️ REQUEST CHANGES - Critical dual-mode violation must be fixed before merge.

@thomhurst thomhurst merged commit 9285b61 into main Jan 11, 2026
13 checks passed
@thomhurst thomhurst deleted the perf/sequential-processing-for-small-test-sets branch January 11, 2026 12:37