Reduce number of frameworks tested in PRs #3511
Conversation
Datadog Report: Branch report ✅
tonyredondo left a comment:
LGTM
pierotibou left a comment:
Thanks a lot
Force-pushed from 3cee051 to 93f0bbe
Benchmarks Report 🐌

Benchmarks for #3511 compared to master:

The following thresholds were used for comparing the benchmark speeds. Allocation changes below 0.5% are ignored.

Benchmark details:

| Benchmark | Speed | Allocations |
| --- | --- | --- |
| Benchmarks.Trace.AgentWriterBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.AppSecBodyBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.AspNetCoreBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.DbCommandBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.ElasticsearchBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.GraphQLBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.HttpClientBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.ILoggerBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.Log4netBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.NLogBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.RedisBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.SerilogBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.SpanBenchmark | Same speed ✔️ | Same allocations ✔️ |
| Benchmarks.Trace.TraceAnnotationsBenchmark | Same speed ✔️ | Same allocations ✔️ |
Code Coverage Report 📊

✔️ Merging #3511 into master will increase line coverage.

View the full report for further details: Datadog.Trace Breakdown ✔️

The following classes have significant coverage changes: 32 classes were removed from Datadog.Trace in #3511. View the full reports for further details.
OmerRaviv left a comment:
LGTM
Summary of changes
Reduces the set of target frameworks tested in PRs to `net461`, `netcoreapp2.1`, `netcoreapp3.1`, and `net7.0`. See below for rationale.

Reason for change
We have been seeing a lot of flakiness in general, and although we're working to tackle the sources of that flakiness, the sheer number of tests we run means even a tiny percentage chance of a flake shows up more often than not. By reducing the number of test permutations we run, we should reduce the number of times the pipeline errors.
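To make the "tiny chance, many tests" point concrete, here is a back-of-the-envelope sketch. The job counts and the per-job flake rate below are made-up illustrative assumptions, not measured values from this repository's CI:

```python
def p_any_flake(jobs: int, per_job_flake_rate: float) -> float:
    """Probability that at least one of `jobs` independent test jobs flakes."""
    return 1 - (1 - per_job_flake_rate) ** jobs

# With 1000 independent jobs and an (assumed) 0.1% flake rate per job,
# a red pipeline is more likely than not; shrinking the matrix to roughly
# 4/7 of its size lowers that noticeably.
print(f"{p_any_flake(1000, 0.001):.0%}")  # -> 63%
print(f"{p_any_flake(570, 0.001):.0%}")   # -> 43%
```

The exact numbers don't matter; the point is that the failure probability compounds with matrix size, so trimming permutations is a direct lever on pipeline reliability.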
Additionally, we recently added another test framework to our matrix (`net7.0`). As we continue to support out-of-support versions of frameworks (e.g. .NET Core 2.1 and .NET Core 3.0), our test matrix will continue to grow. This adds an increasing burden on our CI infrastructure, as we use more and more machines. We need to keep this in check, while not sacrificing reliability.

Also, we realised that the fix for `net461` code coverage introduced in #3482 was not working for unit tests. This is because the `Framework` variable isn't set in that case, and we're calling `dotnet test` instead of `dotnet test --framework {Framework}`.

Implementation details
This PR takes a simple approach: we simply don't run tests on a subset of target frameworks.

- We now explicitly call `dotnet test` and pass in `--framework`, instead of letting the CLI test all frameworks. Note that this was also necessary to fix the code coverage bug.
- Added a parameter to Nuke, `IncludeAllTestFrameworks`. When `true`, this runs against all frameworks, as we do currently. This is the default if not overridden.
- In CI, we set this to `true` for merges to master. For PRs and for scheduled builds, this is `false`. You can override it, and run a full test on a branch, by setting `run_all_test_frameworks=true` when scheduling the build in Azure DevOps. Consider doing this whenever you add a new integration, or make a change that you expect to affect different frameworks differently.

I chose the following test frameworks to be run on every PR, as they test each of the generated dlls we produce for Datadog.Trace:
- `net461` for the .NET FX dll
- `netcoreapp2.1` for the `netstandard2.0` dll
- `netcoreapp3.1` for the `netcoreapp3.1` dll
- `net7.0` for the `net6.0` dll

Which means we no longer test the following:

- `netcoreapp3.0` has been out of support for a long time, and there are only a couple of (pretty obsolete by now) ASP.NET Core paths where it branches differently from the `netcoreapp2.1` behaviour.
- `net5.0` is out of support, uses the same dll as `netcoreapp3.1`, and doesn't (AFAIK) have any behaviour not covered by `netcoreapp3.1`.
- `net6.0` is still in support, but should use the same paths as `net7.0` in general.
- `net7.0` support is still new, so we wanted to keep testing it more intensely, to flush out any issues.

I only removed these stages for integration/unit tests, where we test them all. For the smoke tests I continued to test all frameworks (as we already run a reduced set in PRs). For benchmarks/Azure Functions/Lambda, where we already only test a limited set of frameworks, I left them unchanged.
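The selection rule above can be sketched as follows. This is not the actual build code (the real logic lives in the C# Nuke build project); it is a Python paraphrase of the behaviour described in this PR, with invented names:

```python
# Full matrix vs. the reduced PR set, as described in the PR text.
ALL_TEST_FRAMEWORKS = [
    "net461", "netcoreapp2.1", "netcoreapp3.0", "netcoreapp3.1",
    "net5.0", "net6.0", "net7.0",
]
PR_TEST_FRAMEWORKS = ["net461", "netcoreapp2.1", "netcoreapp3.1", "net7.0"]

def frameworks_to_test(include_all_test_frameworks: bool = True) -> list:
    """Mirror of the Nuke `IncludeAllTestFrameworks` switch: defaults to the
    full matrix; CI passes False for PRs and scheduled builds."""
    return ALL_TEST_FRAMEWORKS if include_all_test_frameworks else PR_TEST_FRAMEWORKS
```

Defaulting to the full matrix means any caller that doesn't know about the new switch keeps the old, exhaustive behaviour.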
Test coverage
Other details
I originally investigated a more detailed approach: dynamically changing the `<TargetFrameworks>` based on the `IncludeAllTestFrameworks` variable. This has the advantage that it can significantly reduce the `Restore` (and some build) stages, which speeds up the build.

Unfortunately, this is made complicated by the complex web of samples and dependency projects we have. Some samples always need building, regardless of the targeted frameworks. And as some stages still use the excluded frameworks (e.g. benchmarks, Azure Functions), it's all quite confusing. I chose to park this approach for now; we can revisit it later if we see fit (potentially revisiting the whole sample-building process etc.).
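For illustration, the parked idea amounts to rewriting each project's `<TargetFrameworks>` element down to the reduced set. A real implementation would live in MSBuild conditions or the Nuke build (and would have to special-case the always-built samples); this Python sketch just shows the transformation:

```python
import re

# The reduced PR set from the list above.
PR_TEST_FRAMEWORKS = {"net461", "netcoreapp2.1", "netcoreapp3.1", "net7.0"}

def trim_target_frameworks(csproj_text: str) -> str:
    """Rewrite every <TargetFrameworks> element in a csproj so it only lists
    the reduced PR set. A sketch of the parked approach, not real build code."""
    def repl(match):
        kept = [t for t in match.group(1).split(";") if t in PR_TEST_FRAMEWORKS]
        return "<TargetFrameworks>" + ";".join(kept) + "</TargetFrameworks>"
    return re.sub(r"<TargetFrameworks>([^<]*)</TargetFrameworks>", repl, csproj_text)
```

As the PR notes, the hard part isn't this rewrite: it's that some projects must keep frameworks that PR test runs exclude, because other stages (benchmarks, Azure Functions) still build against them.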