
Conversation

@aokolnychyi
Contributor

@aokolnychyi aokolnychyi commented Jul 20, 2017

What changes were proposed in this pull request?

This PR adds an optimization rule that infers join conditions using propagated constraints.

For instance, if there is a join where the left relation has 'a = 1' and the right relation has 'b = 1', the rule infers 'a = b' as a join predicate. Only semantically new predicates are appended to the existing join condition.
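
To make the idea concrete, here is a minimal sketch of the inference step (illustrative only, not the exact code in this PR; it assumes Catalyst expression types and ignores deduplication against the existing condition):

import org.apache.spark.sql.catalyst.expressions.{Attribute, EqualTo, Expression, Literal}

// Collect (literal, attribute) pairs from each side's constraints and emit
// an equality between attributes that are pinned to the same literal value.
def inferJoinPredicates(
    leftConstraints: Set[Expression],
    rightConstraints: Set[Expression]): Seq[Expression] = {

  def attrsByLiteral(constraints: Set[Expression]): Seq[(Literal, Attribute)] =
    constraints.toSeq.collect {
      case EqualTo(a: Attribute, l: Literal) => (l, a)
      case EqualTo(l: Literal, a: Attribute) => (l, a)
    }

  for {
    (leftLit, leftAttr) <- attrsByLiteral(leftConstraints)
    (rightLit, rightAttr) <- attrsByLiteral(rightConstraints)
    if leftLit.semanticEquals(rightLit)
  } yield EqualTo(leftAttr, rightAttr)
}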

Refer to the corresponding ticket and tests for more details.

How was this patch tested?

This patch comes with a new test suite to cover the implemented logic.

@SparkQA

SparkQA commented Jul 20, 2017

Test build #79804 has started for PR 18692 at commit e67d4d3.

@cloud-fan
Contributor

I think we already did it via constraint propagation, didn't we?

@aokolnychyi
Contributor Author

@cloud-fan which rule do you mean? PushPredicateThroughJoin seems to be the closest in logic, but it has a slightly different purpose and does not cover this use case. In fact, I used the proposed rule in conjunction with PushPredicateThroughJoin in the tests.

@gatorsmile
Member

We need to use the propagated constraints to infer the join conditions.

@aokolnychyi
Contributor Author

@gatorsmile thanks for the input. Let me check that I understood everything correctly. So, I keep it as a separate rule that is applied only if constraint propagation is enabled. Inside the rule, I rely on join.constraints to infer the join conditions. The remaining logic stays the same. Correct?

I guess that InferFiltersFromConstraints can be used as a guideline.

@gatorsmile
Member

You are on the right track. You might find some bugs/issues when you implement it. Sorry, too busy recently.

@SparkQA

SparkQA commented Jul 30, 2017

Test build #80056 has finished for PR 18692 at commit 915dc7e.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

InnerLike?

Contributor Author

I also thought about this but decided to start with a smaller scope. The motivation was that "SELECT * FROM t1, t2" is resolved into an Inner Join and one has to explicitly use the Cross Join syntax to allow cartesian products. I was not sure if it was OK to replace an explicit Cross Join with a join of a different type. Semantically, we can have InnerLike here.

@gatorsmile
Member

gatorsmile commented Jul 30, 2017

You also need to resolve another case:

        Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t1")
        Seq(1, 2).toDF("col").write.saveAsTable("t2")
        sql("SELECT * FROM t1, t2 WHERE t1.col1 = t1.col2 AND t1.col1 = 1 AND t2.col = 1")

This new rule can infer the unneeded join conditions, col2 = col1 AND col2 = col AND col1 = col. Let us remove useless local predicates.
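
For reference, the "only semantically new predicates" behavior from the description could be expressed like this (a sketch, assuming Catalyst's ExpressionSet, whose membership test works on canonicalized, i.e. semantic, form; the helper name is made up):

import org.apache.spark.sql.catalyst.expressions.{Expression, ExpressionSet}

// Drop inferred predicates that the join already implies; ExpressionSet
// deduplicates up to semantic equality.
def semanticallyNew(
    inferred: Seq[Expression],
    existingConstraints: ExpressionSet): Seq[Expression] =
  inferred.filterNot(existingConstraints.contains)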

@SparkQA

SparkQA commented Jul 30, 2017

Test build #80058 has finished for PR 18692 at commit 851f388.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@gatorsmile
Member

BTW, your PR title and description are out of date.

@aokolnychyi
Contributor Author

@gatorsmile I took a look at the case above. Indeed, the proposed rule triggers this issue, but only indirectly. In the example above, the optimizer will never reach a fixed point. Please find my investigation below.

... 

// The new rule infers correct join predicates
Join Inner, ((col2#33 = col#34) && (col1#32 = col#34))
:- Filter ((col1#32 = col2#33) && (col1#32 = 1))
:  +- Relation[col1#32,col2#33] parquet
+- Filter (col#34 = 1)
   +- Relation[col#34] parquet

// InferFiltersFromConstraints adds more filters
Join Inner, ((col2#33 = col#34) && (col1#32 = col#34))
:- Filter ((((col2#33 = 1) && isnotnull(col1#32)) && isnotnull(col2#33)) && ((col1#32 = col2#33) && (col1#32 = 1)))
:  +- Relation[col1#32,col2#33] parquet
+- Filter (isnotnull(col#34) && (col#34 = 1))
   +- Relation[col#34] parquet

// ConstantPropagation is applied
Join Inner, ((col2#33 = col#34) && (col1#32 = col#34))
!:- Filter (((((col2#33 = 1) && isnotnull(col2#33)) && isnotnull(col1#32)) && ((1 = col2#33) && (col1#32 = 1))) 
 :  +- Relation[col1#32,col2#33] parquet
 +- Filter (isnotnull(col#34) && (col#34 = 1))
    +- Relation[col#34] parquet                          

// (Important) InferFiltersFromConstraints infers (col1#32 = col2#33), which is added to the join condition.
Join Inner, ((col1#32 = col2#33) && ((col2#33 = col#34) && (col1#32 = col#34)))
!:- Filter (((((col2#33 = 1) && isnotnull(col2#33)) && isnotnull(col1#32)) && ((1 = col2#33) && (col1#32 = 1))) 
 :  +- Relation[col1#32,col2#33] parquet
 +- Filter (isnotnull(col#34) && (col#34 = 1))
    +- Relation[col#34] parquet

 // PushPredicateThroughJoin pushes down (col1#32 = col2#33) and then CombineFilters produces
Join Inner, ((col1#32 = col#34) && (col2#33 = col#34))
!:- Filter ((((isnotnull(col1#32) && (col2#33 = 1)) && isnotnull(col2#33)) && ((1 = col2#33) && (col1#32 = 1))) && (col2#33 = col1#32))
 :  +- Relation[col1#32,col2#33] parquet
 +- Filter (isnotnull(col#34) && (col#34 = 1))
    +- Relation[col#34] parquet                                                                      

After that, ConstantPropagation replaces (col2#33 = col1#32) with (1 = 1), BooleanSimplification removes (1 = 1), InferFiltersFromConstraints infers (col2#33 = col1#32) again, and the procedure repeats forever. Since InferFiltersFromConstraints is the last optimization rule, we end up with the redundant condition you mentioned. The Optimizer without the new rule will also not converge on the following query:

Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t1")
Seq(1, 2).toDF("col").write.saveAsTable("t2")
spark.sql("SELECT * FROM t1, t2 WHERE t1.col1 = 1 AND 1 = t1.col2 AND t1.col1 = t2.col AND t1.col2 = t2.col").explain(true)

Correct me if I am wrong, but it seems like an issue with the existing rules.

@aokolnychyi aokolnychyi changed the title [SPARK-21417][SQL] Detect joind conditions via filter expressions [SPARK-21417][SQL] Infer join conditions using propagated constraints Aug 1, 2017
@SparkQA

SparkQA commented Aug 7, 2017

Test build #80362 has finished for PR 18692 at commit 281f955.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@aokolnychyi
Contributor Author

@gatorsmile I updated the rule to cover cross join cases. Regarding the case with the redundant condition you mentioned, I opened SPARK-21652. It is an existing issue and is not caused by the proposed rule. BTW, I can try to fix it once we agree on a solution.

@gatorsmile
Member

@aokolnychyi Thanks for finding the non-convergent case! Let me see how to fix it.

@aokolnychyi
Contributor Author

@gatorsmile what is our decision here? Shall we wait until SPARK-21652 is resolved? In the meantime, I can add some tests and see how the proposed rule works together with all others.

@gatorsmile
Member

Sorry for the delay. @jiangxb1987 will submit a simple fix for the issue you mentioned. That will not be a perfect fix, but it partially resolves the issue. In the future, we need to move the filter removal to a separate batch for cost-based optimization, instead of doing it with filter inference in the same RBO batch.

@tejasapatil
Contributor

Can we restrict this to cartesian products ONLY? One clear downside of doing this for other joins is that it can potentially add a shuffle in the case of bucketing queries (and subqueries in general). After adding the inferred join conditions, the child node's partitioning might no longer satisfy the JOIN node's requirements, which it otherwise could have.

@gatorsmile
Member

gatorsmile commented Sep 1, 2017

In this PR, we should limit it to cartesian products for now. In the future, we need to be smarter when extracting/deciding equi-join keys.

Contributor

Can you also mention the reason why we are restricting this to cross joins only?

For other join types, adding inferred join conditions could force a shuffle of the children, as the child node's partitioning won't satisfy the JOIN node's requirements, which it otherwise could have.

@SparkQA

SparkQA commented Sep 4, 2017

Test build #81390 has finished for PR 18692 at commit cfeae46.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

Based on the discussion above, we only enable this rule for cartesian products. That means the above code should look like:

    case join @ Join(left, right, Cross, None) =>
      val leftConstraints = join.constraints.filter(_.references.subsetOf(left.outputSet))
      val rightConstraints = join.constraints.filter(_.references.subsetOf(right.outputSet))
      val inferredJoinPredicates = inferJoinPredicates(leftConstraints, rightConstraints)
      val newConditionOpt = inferredJoinPredicates.reduceOption(And)
      if (newConditionOpt.isDefined) Join(left, right, Inner, newConditionOpt) else join

Contributor Author

And what about CROSS joins with join conditions? Not sure if they will benefit from the proposed rule, but it is better to ask.

Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t1")
Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t2")
val df = spark.sql("SELECT * FROM t1 CROSS JOIN t2 ON t1.col1 >= t2.col1 WHERE t1.col1 = 1 AND t2.col1 = 1")
df.explain(true)
== Optimized Logical Plan ==
Join Cross, (col1#40 >= col1#42)
:- Filter (isnotnull(col1#40) && (col1#40 = 1))
:  +- Relation[col1#40,col2#41] parquet
+- Filter (isnotnull(col1#42) && (col1#42 = 1))
   +- Relation[col1#42,col2#43] parquet

Member

Please see the rule CheckCartesianProducts. The example above is not a CROSS join.

Contributor Author

@gatorsmile Thanks for getting back.

CheckCartesianProducts identifies a join of type Inner | LeftOuter | RightOuter | FullOuter as a cartesian product if there is no join predicate that has references to both relations.

If we agree to ignore joins of type Cross that have a condition (in this PR), then the use case in this discussion is no longer possible (even if you remove t1.col1 >= t2.col1). Correct? PushPredicateThroughJoin will push t1.col1 = t1.col2 + t2.col2 and t2.col1 = t1.col2 + t2.col2 into the join condition, the proposed rule will not infer anything, and the final join will be of type Cross with a condition that covers both relations. According to the logic of CheckCartesianProducts, it is not considered a cartesian product (since there exists a join predicate that covers both relations, e.g., t1.col1 = t1.col2 + t2.col2).

So, if I have a confirmation that we need to consider only joins of type Cross and without any join conditions, I can update the PR accordingly.
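
For reference, the check described above boils down to roughly the following (a paraphrase of the condition-coverage logic in CheckCartesianProducts; the object name here is illustrative):

import org.apache.spark.sql.catalyst.expressions.PredicateHelper
import org.apache.spark.sql.catalyst.plans.logical.Join

object CartesianCheckSketch extends PredicateHelper {
  // A join is treated as a cartesian product when no conjunct of the join
  // condition references attributes from both the left and the right side.
  def isCartesianProduct(join: Join): Boolean = {
    val conditions = join.condition.map(splitConjunctivePredicates).getOrElse(Nil)
    !conditions.exists { cond =>
      cond.references.exists(join.left.outputSet.contains) &&
        cond.references.exists(join.right.outputSet.contains)
    }
  }
}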

Member

Yes. In this PR, we just need to consider cross joins without any join condition.

In the future, we can extend it.

Member

This function can be removed.

Member

Instead of adding a new rule, we should improve the rule InferFiltersFromConstraints.

Contributor Author

I also thought about this, but InferFiltersFromConstraints does not change the join types it considers. Therefore, I kept them separate. In addition, I thought about renaming the new rule to EliminateCrossJoin.

Member

Yes. Since we decided to focus on cross joins only, we should rename it to EliminateCrossJoin, as you proposed.

@cloud-fan
Contributor

After adding the inferred join conditions, it might lead to the child node's partitioning NOT satisfying the JOIN node's requirements which otherwise could have.

Isn't it an existing problem? The current constraint propagation framework infers as many predicates as possible, so we may already hit this problem. I think we should revisit the constraint propagation framework and think about how to avoid adding more shuffles, rather than stop improving this framework to infer more predicates.

@tejasapatil
Contributor

@cloud-fan: In the event that the (set of join keys) is a superset of the (child node's partitioning keys), it is possible to avoid the shuffle: #19054. This can help with 2 cases:

  • when users unknowingly join over extra columns in addition to bucket columns
  • the one you mentioned (i.e., inferred conditions)

@gatorsmile
Member

cc @gengliangwang Review this?

Member

I don't think we need to separate the constraints into left-only and right-only.
The following case can infer t1.col1 = t2.col1:

Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t1")
Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t2")
val df = spark.sql("SELECT * FROM t1 CROSS JOIN t2 ON t1.col1 >= t2.col1 " +
   "WHERE t1.col1 = t1.col2 + t2.col2 and t2.col1 = t1.col2 + t2.col2")

Contributor Author

@gengliangwang Yeah, makes sense. So, PushPredicateThroughJoin will push the WHERE clause into the join, and the proposed rule will infer t1.col1 = t2.col1 and change the join type to INNER. As a result, the final join condition will be t1.col1 = t2.col1 and t1.col1 >= t2.col1 and (t1.col1 = t1.col2 + t2.col2 and t2.col1 = t1.col2 + t2.col2). Am I right?

Member

Yes, you are right.

@SparkQA

SparkQA commented Nov 24, 2017

Test build #84176 has finished for PR 18692 at commit ab7d966.

  • This patch fails Spark unit tests.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 24, 2017

Test build #84177 has finished for PR 18692 at commit 3e090f9.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

* node's requirements which otherwise could have.
*
* For instance, if there is a CROSS join, where the left relation has 'a = 1' and the right
* relation has 'b = 1', the rule infers 'a = b' as a join predicate.
Member

For instance, given a CROSS join with the constraint 'a = 1' from the left child and the constraint 'b = 1' from the right child, this rule infers a new join predicate 'a = b' and converts the join to an Inner join.

}

private def eliminateCrossJoin(plan: LogicalPlan): LogicalPlan = plan transform {
case join@Join(leftPlan, rightPlan, Cross, None) =>
Member

Nit: join@Join -> join @ Join

}

// the purpose of this class is to treat 'a === 1 and 1 === 'a as the same expressions
implicit class SemanticExpression(private val expr: Expression) {
Member

Can we reuse EquivalentExpressions? You can search the code base and see how the others use it.

Contributor Author

@gatorsmile

I think we just need the case class inside EquivalentExpressions since we have to map all semantically equivalent expressions into a set of attributes (as opposed to mapping an expression into a set of equivalent expressions).

I see two ways to go:

  1. Expose the case class inside EquivalentExpressions with minimum changes in the code base (e.g., using a companion object):
object EquivalentExpressions {

  /**
   * Wrapper around an Expression that provides semantic equality.
   */
  implicit class SemanticExpr(private val e: Expression) {
    override def equals(o: Any): Boolean = o match {
      case other: SemanticExpr => e.semanticEquals(other.e)
      case _ => false
    }

    override def hashCode: Int = e.semanticHash()
  }
}
  2. Keep EquivalentExpressions as it is and maintain a separate map from expressions to attributes in the proposed rule.

Personally, I lean toward the first idea since it might be useful to have SemanticExpr on its own. However, there may be other drawbacks that have not come to my mind.

Member

How about building a new class to process all the cases similar to this one?

An Attribute is also an Expression. Basically, the internals will still be a hash map: mutable.HashMap.empty[SemanticEqualExpr, mutable.MutableList[Expression]]

Contributor Author

Using a Set instead of a List might be beneficial in the proposed rule. What about the following?

class EquivalentExpressionMap {

  private val equivalenceMap = mutable.HashMap.empty[SemanticallyEqualExpr, mutable.Set[Expression]]

  def put(expression: Expression, equivalentExpression: Expression): Unit = {
    val equivalentExpressions = equivalenceMap.getOrElse(expression, mutable.Set.empty)
    if (!equivalentExpressions.contains(equivalentExpression)) {
      equivalenceMap(expression) = equivalentExpressions += equivalentExpression
    }
  }
  
  // produce an immutable copy to avoid any modifications from outside
  def get(expression: Expression): Set[Expression] =
    equivalenceMap.get(expression).fold(Set.empty[Expression])(_.toSet)

}

object EquivalentExpressionMap {

  private implicit class SemanticallyEqualExpr(private val expr: Expression) {
    override def equals(o: Any): Boolean = o match {
      case other: SemanticallyEqualExpr => expr.semanticEquals(other.expr)
      case _ => false
    }

    override def hashCode: Int = expr.semanticHash()
  }
}

Member

I did not check it carefully, but how about ExpressionSet?

Contributor Author

I am afraid ExpressionSet will not help here since we need to map a semantically equivalent expression into a set of attributes that correspond to it. It is not enough to check if there is an equivalent expression. Therefore, EquivalentExpressions and ExpressionSet are not applicable (as far as I see).

EquivalentExpressionMap from the previous comment assumes the following workflow:

val equivalentExpressionMap = new EquivalentExpressionMap
...
equivalentExpressionMap.put(1 * 2, t1.a)
equivalentExpressionMap.put(3, t1.b)
...
equivalentExpressionMap.get(1 * 2) // Set(t1.a)
equivalentExpressionMap.get(2 * 1) // Set(t1.a)

Member

@gatorsmile gatorsmile commented Nov 28, 2017

I mean using ExpressionSet in EquivalentExpressionMap

private val equivalenceMap = mutable.HashMap.empty[SemanticallyEqualExpr, ExpressionSet]

def get(expression: Expression): Set[Expression]

def put(expression: Expression, equivalentExpression: Expression): Unit
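
Filling in that suggestion, the class might look roughly like this (a sketch only, reusing the SemanticallyEqualExpr wrapper proposed earlier in this thread; ExpressionSet(Nil) stands in for an empty set, and this is not necessarily what was committed):

import scala.collection.mutable
import org.apache.spark.sql.catalyst.expressions.{Expression, ExpressionSet}

class EquivalentExpressionMap {
  import EquivalentExpressionMap._

  // Keys compare by semantic equality; values are deduplicated semantically.
  private val equivalenceMap =
    mutable.HashMap.empty[SemanticallyEqualExpr, ExpressionSet]

  def put(expression: Expression, equivalentExpression: Expression): Unit = {
    val current = equivalenceMap.getOrElse(expression, ExpressionSet(Nil))
    equivalenceMap(expression) = current + equivalentExpression
  }

  def get(expression: Expression): Set[Expression] =
    equivalenceMap.getOrElse(expression, ExpressionSet(Nil))
}

object EquivalentExpressionMap {
  // Same wrapper as proposed earlier in this thread.
  private implicit class SemanticallyEqualExpr(private val expr: Expression) {
    override def equals(o: Any): Boolean = o match {
      case other: SemanticallyEqualExpr => expr.semanticEquals(other.expr)
      case _ => false
    }

    override def hashCode: Int = expr.semanticHash()
  }
}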

@gatorsmile
Member

Also add a test case for non-deterministic expressions. For example, given that the left child has a = rand() and the right child has b = rand(), we should not get a = b.
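
The guard such a test would exercise might look like this (an illustrative sketch; the actual rule's structure may differ):

import org.apache.spark.sql.catalyst.expressions.{Attribute, EqualTo, Expression}

// `a = rand()` on the left and `b = rand()` on the right must NOT yield
// `a = b`: the two rand() calls produce independent values.
def usableForInference(constraint: Expression): Boolean = constraint match {
  case eq @ EqualTo(_: Attribute, _) => eq.deterministic
  case eq @ EqualTo(_, _: Attribute) => eq.deterministic
  case _ => false
}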

set
}

val empty: ExpressionSet = ExpressionSet(Nil)
Contributor Author

I thought that writing ExpressionSet.empty would be more readable than ExpressionSet(Nil). Usually, mutable collections have def empty() and immutable ones have separate objects that represent empty collections (e.g., Nil, Stream.Empty). I defined val empty since ExpressionSet is immutable.

@SparkQA

SparkQA commented Nov 30, 2017

Test build #84349 has finished for PR 18692 at commit 0e5a9f2.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
  • class EquivalentExpressionMap

@SparkQA

SparkQA commented Nov 30, 2017

Test build #84351 has finished for PR 18692 at commit 9ab91a1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@gatorsmile
Member

LGTM

Thanks for your patience! It looks much better now. I really appreciate your contributions! Welcome to make more contributions!

Thanks! Merged to master.

@asfgit asfgit closed this in 6ac57fd Nov 30, 2017
/**
* A rule that eliminates CROSS joins by inferring join conditions from propagated constraints.
*
* The optimization is applicable only to CROSS joins. For other join types, adding inferred join
Contributor

Can we apply this optimization to all joins after #19054?

Member

It sounds promising.

@gatorsmile
Member

@aokolnychyi After rethinking it, we might need to revert this PR. Although it converts a CROSS join to an Inner join, it does not improve performance. What do you think?

@cloud-fan
Contributor

Yea, I have the same feeling. If the left side has an a = 1 constraint and the right side has a b = 1 constraint, adding an a = b join condition does not help, as it always evaluates to true.
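
Concretely, reusing the earlier example: every row pair that survives the local filters already satisfies the inferred predicate, so it prunes nothing:

Seq((1, 2)).toDF("col1", "col2").write.saveAsTable("t1")
Seq(1, 2).toDF("col").write.saveAsTable("t2")
// Every row surviving the filters already has col1 = 1 and col = 1, so the
// inferred condition col1 = col matches all pairs and eliminates nothing.
spark.sql("SELECT * FROM t1, t2 WHERE t1.col1 = 1 AND t2.col = 1")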

@aokolnychyi
Contributor Author

Sure, if you guys think it does not give any performance benefits, then let's revert it.

I also had similar concerns but my understanding was that having an inner join with some equality condition can be beneficial during the generation of a physical plan. In other words, Spark should be able to select a more efficient join implementation. I am not sure how it is right now but previously you could have only BroadcastNestedLoopJoin or CartesianProduct without any equality condition. Again, that was my assumption based on what I remember.

@aokolnychyi
Contributor Author

I took a look at JoinSelection. It seems we will not get BroadcastHashJoin or ShuffledHashJoin if we revert this rule.
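
For context, this is roughly the planner-side reason (a simplified paraphrase of the idea behind ExtractEquiJoinKeys, not its actual code): hash-based joins require at least one equality predicate connecting the two sides.

import org.apache.spark.sql.catalyst.expressions.{EqualTo, Expression}
import org.apache.spark.sql.catalyst.plans.logical.Join

// Without at least one EqualTo whose sides refer to opposite children,
// the planner falls back to BroadcastNestedLoopJoin or CartesianProduct.
def hasEquiJoinKeys(join: Join): Boolean =
  join.condition.exists(_.collect {
    case EqualTo(l, r)
        if l.references.subsetOf(join.left.outputSet) &&
           r.references.subsetOf(join.right.outputSet) => ()
    case EqualTo(l, r)
        if l.references.subsetOf(join.right.outputSet) &&
           r.references.subsetOf(join.left.outputSet) => ()
  }.nonEmpty)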

@gatorsmile
Member

Even if we use BroadcastHashJoin or ShuffledHashJoin, it does not help, because the identical key values just cause unnecessary work in both, right?

@aokolnychyi
Contributor Author

Yeah, correct. So, we should revert then.

@gatorsmile
Member

Will do it. Thanks!

@gatorsmile
Member

Done. Reverted.

@dongjoon-hyun
Member

Hi, All.
Since the commit has been reverted from the master branch, can we update the status of the JIRA issue?

@gatorsmile
Member

@aokolnychyi Could you rethink it using cases like a in (0, 2, 3, 4) and b in (0, 2, 3, 4), and then inferring a = b?

@aokolnychyi
Contributor Author

I am not sure we can infer a == b if a in (0, 2, 3, 4) and b in (0, 2, 3, 4).

table 'a'

a1 a2
1  2
3  3
4  5

table 'b'

b1 b2
1  -1
2  -2
3  -4
SELECT * FROM a, b WHERE a1 in (1, 2) AND b1 in (1, 2)
// 1 2 1 -1
// 1 2 2 -2
SELECT * FROM a JOIN b ON a.a1 = b.b1 WHERE a1 in (1, 2) AND b1 in (1, 2)
// 1 2 1 -1

Am I missing anything?

@cloud-fan
Contributor

You are right. Then I don't know if there is any valid use case for inferring join conditions from literals...

@gatorsmile
Member

Yeah. That is a wrong case. Let us revisit it if we can find any useful case here. Thank you!

@Aklakan
Copy link

Aklakan commented Jan 22, 2018

Hi, it's unfortunate to see this PR get reverted.

@gatorsmile

After rethinking it, we might need to revert this PR. Although it converts a CROSS join to an Inner join, it does not improve performance.

The point of this issue is not performance improvement, but that some (in our case automatically generated) queries do not work at all with SPARK, whereas there is no problem with these queries in PostgreSQL and MySQL.

@aokolnychyi
IN predicates are not equality constraints (even though they could be expanded: a IN (x, y, z) -> a == x OR a == y OR a == z), so your example does not apply to the original example of join inference from literals, which was about inferring a == b from a = someConst AND b = theSameConst.

@cloud-fan
Contributor

The point of this issue is not performance improvement, but that some (in our case automatically generated) queries do not work at all with SPARK, whereas there is no problem with these queries in PostgreSQL and MySQL.

I'm surprised to hear this. Did you turn on CROSS JOIN via spark.sql.crossJoin.enabled?
