[SPARK-9066][SQL] Improve cartesian performance #7417
```diff
@@ -27,24 +27,26 @@ import org.apache.spark.sql.execution.{BinaryNode, SparkPlan}
  * :: DeveloperApi ::
  */
 @DeveloperApi
-case class CartesianProduct(left: SparkPlan, right: SparkPlan) extends BinaryNode {
-  override def output: Seq[Attribute] = left.output ++ right.output
+case class CartesianProduct(
+    left: SparkPlan,
+    right: SparkPlan,
+    buildSide: BuildSide) extends BinaryNode {

-  protected override def doExecute(): RDD[InternalRow] = {
-    val leftResults = left.execute().map(_.copy())
-    val rightResults = right.execute().map(_.copy())
+  private val (streamed, broadcast) = buildSide match {
+    case BuildRight => (left, right)
+    case BuildLeft => (right, left)
+  }

-    val cartesianRdd = if (leftResults.partitions.size > rightResults.partitions.size) {
-      rightResults.cartesian(leftResults).mapPartitions { iter =>
-        iter.map(tuple => (tuple._2, tuple._1))
-      }
-    } else {
-      leftResults.cartesian(rightResults)
-    }
+  override def output: Seq[Attribute] = left.output ++ right.output

-    cartesianRdd.mapPartitions { iter =>
+  protected override def doExecute(): RDD[InternalRow] = {
+    val broadcastedRelation = sparkContext.broadcast(broadcast.execute().map(_.copy()))
+    broadcastedRelation.value.cartesian(streamed.execute().map(_.copy())).mapPartitions{ iter =>
       val joinedRow = new JoinedRow
```
**Contributor:**
> Quick question: why not use …

**Contributor:**
> Yes, using partition count here is not accurate. Consider an RDD with 100 partitions where each partition holds one record, versus an RDD with 10 partitions where each partition holds 100 million records: the approach above would cause more scans from HDFS.

**Contributor (Author):**
> @hvanhovell Yes, using `sizeInBytes` is better, but it has a problem too: if `leftResults` has only one record and that record is large, while `rightResults` has many records whose total size is small, then this scenario performs worse. The best way would be to check the total number of records per partition, but we cannot get that at the moment.
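The trade-off the reviewers describe can be made concrete. Below is a minimal plain-Scala sketch, not Spark code; the names `Relation`, `byPartitions`, and `bySize` are hypothetical stand-ins for the two heuristics under discussion. It shows how the partition-count heuristic and a `sizeInBytes`-style heuristic can pick opposite sides for the reviewer's example.

```scala
// Hypothetical model of a relation's physical statistics (not a Spark API).
case class Relation(numPartitions: Int, sizeInBytes: Long)

sealed trait BuildSide
case object BuildLeft extends BuildSide
case object BuildRight extends BuildSide

// Heuristic in the old code: treat the side with fewer partitions as smaller.
def byPartitions(left: Relation, right: Relation): BuildSide =
  if (right.numPartitions <= left.numPartitions) BuildRight else BuildLeft

// Heuristic suggested in review: build from the side that is smaller in bytes.
def bySize(left: Relation, right: Relation): BuildSide =
  if (right.sizeInBytes <= left.sizeInBytes) BuildRight else BuildLeft

// The reviewer's example: 100 partitions of one small record each, versus
// 10 partitions of 100 million records each. The two heuristics disagree.
val tiny = Relation(numPartitions = 100, sizeInBytes = 100L * 16)
val huge = Relation(numPartitions = 10, sizeInBytes = 10L * 100000000L * 16)
```

As the author notes, neither heuristic is always right; `bySize` can also lose when one side has a single very large record.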
```diff
-      iter.map(r => joinedRow(r._1, r._2))
+      buildSide match {
+        case BuildRight => iter.map(r => joinedRow(r._1, r._2))
+        case BuildLeft => iter.map(r => joinedRow(r._2, r._1))
+      }
     }
   }
 }
```
**Contributor:**
> In other places, `BuildRight` means the right side is a small table, such as in `org.apache.spark.sql.execution.joins.HashJoin`, and we usually broadcast it. Could you follow these semantics?
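The convention the reviewer refers to can be sketched as follows. This is an illustrative hash join on plain collections, not Spark's `HashJoin`: `BuildRight` means the right relation is the small one, so it is the side that gets hashed (or broadcast), while the other side is streamed.

```scala
sealed trait BuildSide
case object BuildLeft extends BuildSide
case object BuildRight extends BuildSide

type Row = (Int, String) // (join key, value)

def hashJoin(left: Seq[Row], right: Seq[Row], buildSide: BuildSide): Seq[(Int, String, String)] = {
  // BuildRight: the right side is small, so build the hash table from it.
  val (streamed, build) = buildSide match {
    case BuildRight => (left, right)
    case BuildLeft  => (right, left)
  }
  val hashed = build.groupBy(_._1)
  for {
    s <- streamed
    b <- hashed.getOrElse(s._1, Nil)
  } yield buildSide match {
    // Output columns are always ordered (key, left value, right value).
    case BuildRight => (s._1, s._2, b._2)
    case BuildLeft  => (s._1, b._2, s._2)
  }
}
```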