# [SPARK-34637][SQL] Support DPP + AQE when the broadcast exchange can be reused (#31756)
The commit updates `PlanAdaptiveDynamicPruningFilters`: the rule now takes the root plan (renamed `rootPlan`), plans the `BroadcastExchangeExec` up front, and checks reuse by comparing the join's build side against the exchange itself rather than against the bare build plan:

```diff
@@ -27,8 +27,7 @@ import org.apache.spark.sql.execution.joins.{BroadcastHashJoinExec, HashedRelati
 /**
  * A rule to insert dynamic pruning predicates in order to reuse the results of broadcast.
  */
-case class PlanAdaptiveDynamicPruningFilters(
-    originalPlan: SparkPlan) extends Rule[SparkPlan] {
+case class PlanAdaptiveDynamicPruningFilters(rootPlan: SparkPlan) extends Rule[SparkPlan] {
   def apply(plan: SparkPlan): SparkPlan = {
     if (!conf.dynamicPartitionPruningEnabled) {
       return plan
@@ -40,20 +39,20 @@ case class PlanAdaptiveDynamicPruningFilters(
             adaptivePlan: AdaptiveSparkPlanExec), exprId, _)) =>
         val packedKeys = BindReferences.bindReferences(
           HashJoin.rewriteKeyExpr(buildKeys), adaptivePlan.executedPlan.output)
+        val mode = HashedRelationBroadcastMode(packedKeys)
+        // plan a broadcast exchange of the build side of the join
+        val exchange = BroadcastExchangeExec(mode, adaptivePlan.executedPlan)
 
         val canReuseExchange = conf.exchangeReuseEnabled && buildKeys.nonEmpty &&
-          originalPlan.find {
+          rootPlan.find {
             case BroadcastHashJoinExec(_, _, _, BuildLeft, _, left, _, _) =>
-              left.sameResult(adaptivePlan.executedPlan)
+              left.sameResult(exchange)
             case BroadcastHashJoinExec(_, _, _, BuildRight, _, _, right, _) =>
-              right.sameResult(adaptivePlan.executedPlan)
+              right.sameResult(exchange)
             case _ => false
           }.isDefined
 
-        if(canReuseExchange) {
-          val mode = HashedRelationBroadcastMode(packedKeys)
-          // plan a broadcast exchange of the build side of the join
-          val exchange = BroadcastExchangeExec(mode, adaptivePlan.executedPlan)
+        if (canReuseExchange) {
           exchange.setLogicalLink(adaptivePlan.executedPlan.logicalLink.get)
           val newAdaptivePlan = AdaptiveSparkPlanExec(
             exchange, adaptivePlan.context, adaptivePlan.preprocessingRules, true)
```

**Contributor** (on the `BindReferences.bindReferences` lines): we can move this into …
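To see why the second hunk matters, here is a simplified, self-contained sketch of the reuse check. These are hypothetical stand-in classes, not Spark's real `SparkPlan`/`BroadcastHashJoinExec` API: the point is that the join's build side in the root plan is the *exchange* node, so comparing against the planned `BroadcastExchange` succeeds where comparing against the bare build plan would not.

```scala
// Minimal stand-ins for a physical plan tree (assumption: not Spark classes).
sealed trait Plan {
  def children: Seq[Plan] = Nil
  // Stand-in for SparkPlan.sameResult; plain structural equality here.
  def sameResult(other: Plan): Boolean = this == other
  // Stand-in for TreeNode.find: first node (pre-order) satisfying f.
  def find(f: Plan => Boolean): Option[Plan] =
    if (f(this)) Some(this) else children.flatMap(_.find(f)).headOption
}
case class Scan(table: String) extends Plan
case class BroadcastExchange(child: Plan) extends Plan {
  override def children: Seq[Plan] = Seq(child)
}
sealed trait BuildSide
case object BuildLeft extends BuildSide
case object BuildRight extends BuildSide
case class BroadcastHashJoin(buildSide: BuildSide, left: Plan, right: Plan) extends Plan {
  override def children: Seq[Plan] = Seq(left, right)
}

object ReuseCheck {
  // Mirrors canReuseExchange after this commit: compare the build side of any
  // broadcast hash join in the root plan against the exchange we just planned.
  def canReuseExchange(rootPlan: Plan, exchange: BroadcastExchange): Boolean =
    rootPlan.find {
      case BroadcastHashJoin(BuildLeft, left, _)   => left.sameResult(exchange)
      case BroadcastHashJoin(BuildRight, _, right) => right.sameResult(exchange)
      case _                                       => false
    }.isDefined
}
```

With a join whose build side is `BroadcastExchange(Scan("dim"))`, checking against the exchange finds a match, while checking against the bare `Scan("dim")` (the pre-commit behavior, comparing `adaptivePlan.executedPlan`) does not, so reuse would be missed.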
> **Review comment:** We need to use the `initialPlan`, not the `inputPlan`, because the `inputPlan` has not had the `queryStagePreparationRules` (`EnsureRequirements`) applied.

> **Review comment:** I think it's better to pass `this` as the root plan. `AdaptiveSparkPlanExec` keeps changing as more and more query stages are completed, so it's better that `PlanAdaptiveDynamicPruningFilters` always looks at the latest plan.
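The second comment's point can be illustrated with a small hypothetical sketch (these are not Spark classes): a rule constructed with a *snapshot* of the plan keeps seeing the old version, while a rule holding a reference to the adaptive executor itself (passing `this`) always observes the re-optimized plan after each stage completes.

```scala
// Stand-in for AdaptiveSparkPlanExec: its physical plan is replaced as query
// stages finish (assumption: plans modeled as plain strings for brevity).
final class AdaptiveExec(initial: String) {
  @volatile var currentPlan: String = initial
  def stageCompleted(newPlan: String): Unit = { currentPlan = newPlan }
}

// A rule handed a frozen copy of the plan at construction time: goes stale.
final class SnapshotRule(snapshot: String) {
  def planSeen: String = snapshot
}

// A rule handed the executor itself (the `this` of the review comment):
// every lookup reads the latest re-optimized plan.
final class LiveRule(root: AdaptiveExec) {
  def planSeen: String = root.currentPlan
}
```

After a stage completes and the plan changes, `SnapshotRule` still reports the original plan while `LiveRule` reports the new one, which is why scanning the latest plan for reusable broadcast exchanges favors the live reference.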