Commit 5b97165
[SPARK-36444][SQL] Remove OptimizeSubqueries from batch of PartitionPruning
### What changes were proposed in this pull request?

Remove `OptimizeSubqueries` from the `PartitionPruning` batch so that dynamic partition pruning (DPP) supports more cases. For example:

```sql
SELECT date_id, product_id FROM fact_sk f
JOIN (select store_id + 3 as new_store_id from dim_store where country = 'US') s
ON f.store_id = s.new_store_id
```

Before this PR:

```
== Physical Plan ==
*(2) Project [date_id#3998, product_id#3999]
+- *(2) BroadcastHashJoin [store_id#4001], [new_store_id#3997], Inner, BuildRight, false
   :- *(2) ColumnarToRow
   :  +- FileScan parquet default.fact_sk[date_id#3998,product_id#3999,store_id#4001] Batched: true, DataFilters: [], Format: Parquet, PartitionFilters: [isnotnull(store_id#4001), dynamicpruningexpression(true)], PushedFilters: [], ReadSchema: struct<date_id:int,product_id:int>
   +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [id=#274]
      +- *(1) Project [(store_id#4002 + 3) AS new_store_id#3997]
         +- *(1) Filter ((isnotnull(country#4004) AND (country#4004 = US)) AND isnotnull((store_id#4002 + 3)))
            +- *(1) ColumnarToRow
               +- FileScan parquet default.dim_store[store_id#4002,country#4004] Batched: true, DataFilters: [isnotnull(country#4004), (country#4004 = US), isnotnull((store_id#4002 + 3))], Format: Parquet, PartitionFilters: [], PushedFilters: [IsNotNull(country), EqualTo(country,US)], ReadSchema: struct<store_id:int,country:string>
```

After this PR:

```
== Physical Plan ==
*(2) Project [date_id#3998, product_id#3999]
+- *(2) BroadcastHashJoin [store_id#4001], [new_store_id#3997], Inner, BuildRight, false
   :- *(2) ColumnarToRow
   :  +- FileScan parquet default.fact_sk[date_id#3998,product_id#3999,store_id#4001] Batched: true, DataFilters: [], Format: Parquet, PartitionFilters: [isnotnull(store_id#4001), dynamicpruningexpression(store_id#4001 IN dynamicpruning#4007)], PushedFilters: [], ReadSchema: struct<date_id:int,product_id:int>
   :        +- SubqueryBroadcast dynamicpruning#4007, 0, [new_store_id#3997], [id=#263]
   :           +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [id=#262]
   :              +- *(1) Project [(store_id#4002 + 3) AS new_store_id#3997]
   :                 +- *(1) Filter ((isnotnull(country#4004) AND (country#4004 = US)) AND isnotnull((store_id#4002 + 3)))
   :                    +- *(1) ColumnarToRow
   :                       +- FileScan parquet default.dim_store[store_id#4002,country#4004] Batched: true, DataFilters: [isnotnull(country#4004), (country#4004 = US), isnotnull((store_id#4002 + 3))], Format: Parquet, PartitionFilters: [], PushedFilters: [IsNotNull(country), EqualTo(country,US)], ReadSchema: struct<store_id:int,country:string>
   +- ReusedExchange [new_store_id#3997], BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [id=#262]
```

Before this change, the pruning filter degenerates to `dynamicpruningexpression(true)` because `OptimizeSubqueries` infers extra filters on the subquery, so the broadcast exchange cannot be reused.
The following is the plan with `spark.sql.optimizer.dynamicPartitionPruning.reuseBroadcastOnly` disabled:

```
== Physical Plan ==
*(2) Project [date_id#3998, product_id#3999]
+- *(2) BroadcastHashJoin [store_id#4001], [new_store_id#3997], Inner, BuildRight, false
   :- *(2) ColumnarToRow
   :  +- FileScan parquet default.fact_sk[date_id#3998,product_id#3999,store_id#4001] Batched: true, DataFilters: [], Format: Parquet, PartitionFilters: [isnotnull(store_id#4001), dynamicpruningexpression(store_id#4001 IN subquery#4009)], PushedFilters: [], ReadSchema: struct<date_id:int,product_id:int>
   :        +- Subquery subquery#4009, [id=#284]
   :           +- *(2) HashAggregate(keys=[new_store_id#3997#4008], functions=[])
   :              +- Exchange hashpartitioning(new_store_id#3997#4008, 5), ENSURE_REQUIREMENTS, [id=#280]
   :                 +- *(1) HashAggregate(keys=[new_store_id#3997 AS new_store_id#3997#4008], functions=[])
   :                    +- *(1) Project [(store_id#4002 + 3) AS new_store_id#3997]
   :                       +- *(1) Filter (((isnotnull(store_id#4002) AND isnotnull(country#4004)) AND (country#4004 = US)) AND isnotnull((store_id#4002 + 3)))
   :                          +- *(1) ColumnarToRow
   :                             +- FileScan parquet default.dim_store[store_id#4002,country#4004] Batched: true, DataFilters: [isnotnull(store_id#4002), isnotnull(country#4004), (country#4004 = US), isnotnull((store_id#4002..., Format: Parquet, PartitionFilters: [], PushedFilters: [IsNotNull(store_id), IsNotNull(country), EqualTo(country,US)], ReadSchema: struct<store_id:int,country:string>
   +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [id=#305]
      +- *(1) Project [(store_id#4002 + 3) AS new_store_id#3997]
         +- *(1) Filter ((isnotnull(country#4004) AND (country#4004 = US)) AND isnotnull((store_id#4002 + 3)))
            +- *(1) ColumnarToRow
               +- FileScan parquet default.dim_store[store_id#4002,country#4004] Batched: true, DataFilters: [isnotnull(country#4004), (country#4004 = US), isnotnull((store_id#4002 + 3))], Format: Parquet, PartitionFilters: [], PushedFilters: [IsNotNull(country), EqualTo(country,US)], ReadSchema: struct<store_id:int,country:string>
```

### Why are the changes needed?

Improve DPP to support more cases.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Unit test and benchmark test:

SQL | Before this PR (seconds) | After this PR (seconds)
-- | -- | --
TPC-DS q58 | 40 | 20
TPC-DS q83 | 18 | 14

Closes #33664 from wangyum/SPARK-36444.

Authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Yuming Wang <yumwang@ebay.com>
(cherry picked from commit 2310b99)
Signed-off-by: Yuming Wang <yumwang@ebay.com>
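To see the difference described above in a local session, here is a minimal sketch, not code from this PR: the table layouts, row counts, and `spark.range`-based setup are assumptions for illustration; only the query text comes from the description.

```scala
// Minimal repro sketch (assumption: fact_sk/dim_store contents are made up;
// only the query comes from the PR description).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("SPARK-36444 sketch")
  .config("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")
  .getOrCreate()

// A partitioned fact table and a small dimension table.
spark.range(0, 1000)
  .selectExpr("id AS date_id", "id AS product_id", "id % 50 AS store_id")
  .write.partitionBy("store_id").format("parquet").saveAsTable("fact_sk")

spark.range(0, 50)
  .selectExpr("id AS store_id", "if(id < 10, 'US', 'DE') AS country")
  .write.format("parquet").saveAsTable("dim_store")

// The query from the PR description: the pruning key is an expression (store_id + 3).
val df = spark.sql(
  """
    |SELECT date_id, product_id FROM fact_sk f
    |JOIN (SELECT store_id + 3 AS new_store_id FROM dim_store WHERE country = 'US') s
    |ON f.store_id = s.new_store_id
  """.stripMargin)

// Before this change the fact-table scan shows dynamicpruningexpression(true);
// with it, the scan should reference a SubqueryBroadcast over dim_store.
df.explain()
```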
Parent: 9544c24 · Commit: 5b97165

File tree

10 files changed: +1235 −1156 lines


sql/core/src/main/scala/org/apache/spark/sql/execution/SparkOptimizer.scala

Lines changed: 1 addition & 2 deletions
```diff
@@ -42,8 +42,7 @@ class SparkOptimizer(
   override def defaultBatches: Seq[Batch] = (preOptimizationBatches ++ super.defaultBatches :+
     Batch("Optimize Metadata Only Query", Once, OptimizeMetadataOnlyQuery(catalog)) :+
     Batch("PartitionPruning", Once,
-      PartitionPruning,
-      OptimizeSubqueries) :+
+      PartitionPruning) :+
     Batch("Pushdown Filters from PartitionPruning", fixedPoint,
       PushDownPredicates) :+
     Batch("Cleanup filters that cannot be pushed down", Once,
```

sql/core/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q58.sf100/explain.txt

Lines changed: 289 additions & 278 deletions
Large diffs are not rendered by default.

sql/core/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q58.sf100/simplified.txt

Lines changed: 24 additions & 20 deletions
```diff
@@ -18,30 +18,32 @@ TakeOrderedAndProject [item_id,ss_item_rev,ss_dev,cs_item_rev,cs_dev,ws_item_rev
 ColumnarToRow
 InputAdapter
 Scan parquet default.store_sales [ss_item_sk,ss_ext_sales_price,ss_sold_date_sk]
-InputAdapter
-BroadcastExchange #2
-WholeStageCodegen (2)
-Project [d_date_sk]
-BroadcastHashJoin [d_date,d_date]
-Filter [d_date_sk]
-ColumnarToRow
-InputAdapter
-Scan parquet default.date_dim [d_date_sk,d_date]
-InputAdapter
-BroadcastExchange #3
-WholeStageCodegen (1)
-Project [d_date]
-Filter [d_week_seq]
-Subquery #1
+SubqueryBroadcast [d_date_sk] #1
+BroadcastExchange #2
+WholeStageCodegen (2)
+Project [d_date_sk]
+BroadcastHashJoin [d_date,d_date]
+Filter [d_date_sk]
+ColumnarToRow
+InputAdapter
+Scan parquet default.date_dim [d_date_sk,d_date]
+InputAdapter
+BroadcastExchange #3
 WholeStageCodegen (1)
-Project [d_week_seq]
-Filter [d_date]
+Project [d_date]
+Filter [d_week_seq]
+Subquery #2
+WholeStageCodegen (1)
+Project [d_week_seq]
+Filter [d_date]
+ColumnarToRow
+InputAdapter
+Scan parquet default.date_dim [d_date,d_week_seq]
 ColumnarToRow
 InputAdapter
 Scan parquet default.date_dim [d_date,d_week_seq]
-ColumnarToRow
-InputAdapter
-Scan parquet default.date_dim [d_date,d_week_seq]
+InputAdapter
+ReusedExchange [d_date_sk] #2
 InputAdapter
 BroadcastExchange #4
 WholeStageCodegen (3)
@@ -66,6 +68,7 @@ TakeOrderedAndProject [item_id,ss_item_rev,ss_dev,cs_item_rev,cs_dev,ws_item_rev
 ColumnarToRow
 InputAdapter
 Scan parquet default.catalog_sales [cs_item_sk,cs_ext_sales_price,cs_sold_date_sk]
+ReusedSubquery [d_date_sk] #1
 InputAdapter
 ReusedExchange [d_date_sk] #2
 InputAdapter
@@ -87,6 +90,7 @@ TakeOrderedAndProject [item_id,ss_item_rev,ss_dev,cs_item_rev,cs_dev,ws_item_rev
 ColumnarToRow
 InputAdapter
 Scan parquet default.web_sales [ws_item_sk,ws_ext_sales_price,ws_sold_date_sk]
+ReusedSubquery [d_date_sk] #1
 InputAdapter
 ReusedExchange [d_date_sk] #2
 InputAdapter
```
