
Conversation

@yaooqinn (Member) commented Aug 25, 2020

What changes were proposed in this pull request?

This PR lets JDBC clients identify Spark interval columns properly.

Why are the changes needed?

JDBC users can query interval values through the Thrift server and create views with interval columns, e.g.

`CREATE global temp view view1 as select interval 1 day as i;`

but when they want to get the details of the columns of view1, they will fail with `Unrecognized type name: INTERVAL`:

```

Caused by: java.lang.IllegalArgumentException: Unrecognized type name: INTERVAL
	at org.apache.hadoop.hive.serde2.thrift.Type.getType(Type.java:170)
	at org.apache.spark.sql.hive.thriftserver.ThriftserverShimUtils$.toJavaSQLType(ThriftserverShimUtils.scala:53)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$addToRowSet$1(SparkGetColumnsOperation.scala:157)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:102)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.addToRowSet(SparkGetColumnsOperation.scala:149)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$6(SparkGetColumnsOperation.scala:113)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$6$adapted(SparkGetColumnsOperation.scala:112)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$5(SparkGetColumnsOperation.scala:112)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$5$adapted(SparkGetColumnsOperation.scala:111)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.runInternal(SparkGetColumnsOperation.scala:111)
	... 34 more
```
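For context, the failure can be reproduced from a plain JDBC client. This is a minimal sketch, assuming a Spark Thrift Server running locally on the default port 10000 with the Hive JDBC driver on the classpath (the URL and object names are illustrative):

```scala
import java.sql.DriverManager

object GetColumnsRepro {
  def main(args: Array[String]): Unit = {
    // Connect through the Hive JDBC driver to a local Spark Thrift Server.
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default")
    try {
      conn.createStatement().execute(
        "CREATE GLOBAL TEMP VIEW view1 AS SELECT interval 1 day AS i")
      // getColumns is served by SparkGetColumnsOperation; before this fix it
      // threw "Unrecognized type name: INTERVAL" for the interval column.
      val cols = conn.getMetaData.getColumns(null, "global_temp", "view1", "i")
      while (cols.next()) {
        println(s"${cols.getString("COLUMN_NAME")} -> ${cols.getString("TYPE_NAME")}")
      }
    } finally {
      conn.close()
    }
  }
}
```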

Does this PR introduce any user-facing change?

Yes.

Before: (screenshot)

After: (screenshot)

How was this patch tested?

new tests

@yaooqinn (Member Author):

cc @maropu @cloud-fan thanks a bunch~

@SparkQA commented Aug 25, 2020

Test build #127884 has finished for PR 29539 at commit 44cf73b.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Aug 25, 2020

Test build #127885 has finished for PR 29539 at commit ae9d76a.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yaooqinn changed the title from "[SPARK-32696][SQL]Get columns operation should handle interval column properly" to "[SPARK-32696][SQL][test-hive1.2][test-hadoop2.7]Get columns operation should handle interval column properly" on Aug 25, 2020
@yaooqinn (Member Author):

retest this please

@SparkQA commented Aug 25, 2020

Test build #127890 has finished for PR 29539 at commit ae9d76a.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor):

also cc @bogdanghit @juliuszsompolski

@SparkQA commented Aug 26, 2020

Test build #127907 has finished for PR 29539 at commit bd3e692.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

```scala
}
}

test("get columns operation should handle interval column properly") {
```
Contributor:

Nit: Can we also test an extract query of the table with calendar types?

Member Author:

This PR fixes SparkGetColumnsOperation. As a metadata operation, it is not related to extract queries, which are executed by SparkExecuteStatementOperation. BTW, I notice that test cases for extracting interval values from views through the Thrift server might be missing in the current codebase. Maybe we can make another PR to improve these, WDYT @cloud-fan? Or I can just add some UTs here: https://github.com/yaooqinn/spark/blob/d24d27f1bc39e915df23d65f8fda0d83e716b308/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala#L674

Contributor:

That's why I think we should add a small check here to make sure they actually work.

Contributor:

I'm OK to add the missing test in this PR.

Member Author:

done, thank you guys for the check~

@bogdanghit (Contributor) left a comment:

LGTM, pending extending / adding the test.

@SparkQA commented Aug 27, 2020

Test build #127944 has finished for PR 29539 at commit d3888d4.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor):

thanks, merging to master!

@cloud-fan closed this in f14f374 on Aug 27, 2020
```scala
withJdbcStatement(viewName1, viewName2) { statement =>
  statement.executeQuery(ddl1)
  statement.executeQuery(ddl2)
  val rs = statement.executeQuery(s"SELECT v1.i as a, v2.i as b FROM global_temp.$viewName1" +
```
Contributor:

What will be returned by `rs.getMetaData.getColumnType` for the interval column?
I believe it will be a string, because we just make it a string in the output schema in https://github.com/apache/spark/blob/master/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala#L362

There was a discussion last year in #25694 (comment): because the interval is just returned as a string in the query output schema, and is not compliant with standard SQL interval types, it should not be listed by SparkGetTypeInfoOperation and should be treated just as a string.

If we want it to be returned as java.sql.Types.OTHER, then to make this complete it should be returned as such in the result set metadata as well, and listed in SparkGetTypeInfoOperation.
Or we could map it to string.
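For reference, a sketch of the check being described here, assuming an open JDBC `statement` against the Thrift server (the commented values are expectations based on the current string mapping, not verified output):

```scala
// The interval is rendered as a string in the query output schema, so the
// result set metadata is expected to report a string type rather than OTHER.
val rs = statement.executeQuery("SELECT interval 1 day AS i")
val md = rs.getMetaData
println(md.getColumnType(1))       // expectation: java.sql.Types.VARCHAR
println(md.getColumnTypeName(1))   // expectation: a string-ish type name
println(md.getColumnClassName(1))  // expectation: "java.lang.String"
```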

Member Author:

The ResultSetMetadata describes the return types of the result set, while the GetColumnsOperation retrieves the metadata of the original table or view. IMHO, they do not have to be the same.

Member Author:

I will make a follow-up to add it to SparkGetTypeInfoOperation.

Contributor:

It would be most consistent if the table's column metadata and the metadata of the result of `select * from table` had the same types. I don't really have a preference whether it should be string or interval. Last year the discussion leaned towards string, because the intervals are not really standard compliant, but indeed there might be value in having the column marked as interval in the schema, so that the application might try to parse it as such... Though, e.g. between Spark 2.4 and Spark 3.0 the format of the returned interval changed, see https://issues.apache.org/jira/browse/SPARK-32132, which might have broken how anyone would have parsed it, so Spark is not really giving any guarantees as to the stability of the format of this type...

Ultimately, I think this is very rarely used, as intervals didn't even work at all until #25277 and it was not raised earlier...

@yaooqinn (Member Author) commented Aug 27, 2020:

The ResultSetMetadata is used by the client-side user to do the right work on the result set they get; when the interval values become strings, they need to follow the rules of String operations in their subsequent processing.

The GetColumnsOperation tells client-side users how the data is stored and formed at the server side. If JDBC users know that column A is an interval type, then they should notice that they cannot perform string functions, comparators, etc. on column A.
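To illustrate the distinction, a sketch reusing the `conn` and `view1` from the reproduction example above (the commented values reflect the intended behavior after this PR, stated as expectations):

```scala
// 1) Catalog metadata (getColumns): how the column is defined server-side.
val cols = conn.getMetaData.getColumns(null, "global_temp", "view1", "i")
cols.next()
println(cols.getInt("DATA_TYPE"))     // expectation: java.sql.Types.OTHER
println(cols.getString("TYPE_NAME"))  // expectation: "INTERVAL"

// 2) Result set metadata: how the values actually arrive (as strings).
val rs = conn.createStatement().executeQuery("SELECT i FROM global_temp.view1")
println(rs.getMetaData.getColumnType(1))  // expectation: a string type, not OTHER
```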

Contributor:

In the ResultSetMetadata, would the client-side user not learn that they should follow string rules by using `getColumnClassName`, which should return String, while it would still be valuable to them to know that the column represents an interval?

```scala
assert(rowSet.getString("TABLE_NAME") === viewName)
assert(rowSet.getString("COLUMN_NAME") === "i")
assert(rowSet.getInt("DATA_TYPE") === java.sql.Types.OTHER)
assert(rowSet.getString("TYPE_NAME").equalsIgnoreCase(CalendarIntervalType.sql))
```
Contributor:

If we treat it as a string type, can we still report the TYPE_NAME as INTERVAL?

@yaooqinn (Member Author) commented Aug 27, 2020:

I guess that all the meta operations should do nothing but specifically describe how all the objects (databases/tables/columns, etc.) are stored in the Spark system.

(screenshot)

If we change the column metadata here from INTERVAL to STRING, the column appears to become orderable and comparable (JDBC users may guess so); they may have no idea what they are actually dealing with and get a completely different picture of the meta-information compared to users of spark-sql self-contained applications.
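As a sketch of why the reported type matters to clients, hypothetical consumer code branching on `DATA_TYPE` (same illustrative `conn` and view as above):

```scala
val cols = conn.getMetaData.getColumns(null, "global_temp", "view1", "%")
while (cols.next()) {
  val name = cols.getString("COLUMN_NAME")
  cols.getInt("DATA_TYPE") match {
    case java.sql.Types.OTHER =>
      // Opaque server-side type: do not assume string semantics such as
      // ordering, LIKE, or concatenation.
      println(s"$name: opaque type ${cols.getString("TYPE_NAME")}")
    case java.sql.Types.VARCHAR =>
      // Reporting STRING here would invite exactly those assumptions.
      println(s"$name: string column")
    case other =>
      println(s"$name: java.sql.Types code $other")
  }
}
```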

MaxGekk pushed a commit that referenced this pull request on Apr 28, 2021 ("…column properly"):

### What changes were proposed in this pull request?
This PR lets JDBC clients identify ANSI interval columns properly.

### Why are the changes needed?
This PR is similar to #29539.
JDBC users can query interval values through the Thrift server and create views with ANSI interval columns, e.g.
`CREATE global temp view view1 as select interval '1-1' year to month as I;`
but when they want to get the details of the columns of view1, they will fail with `Unrecognized type name: YEAR-MONTH INTERVAL`
```
Caused by: java.lang.IllegalArgumentException: Unrecognized type name: YEAR-MONTH INTERVAL
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.toJavaSQLType(SparkGetColumnsOperation.scala:190)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$addToRowSet$1(SparkGetColumnsOperation.scala:206)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.addToRowSet(SparkGetColumnsOperation.scala:198)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$7(SparkGetColumnsOperation.scala:109)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$7$adapted(SparkGetColumnsOperation.scala:109)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$5(SparkGetColumnsOperation.scala:109)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.$anonfun$runInternal$5$adapted(SparkGetColumnsOperation.scala:107)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation.runInternal(SparkGetColumnsOperation.scala:107)
	... 34 more
```

### Does this PR introduce _any_ user-facing change?
Yes. Hive JDBC can now recognize ANSI intervals.

### How was this patch tested?
Jenkins test.

Closes #32345 from beliefer/SPARK-35085.

Lead-authored-by: gengjiaan <[email protected]>
Co-authored-by: beliefer <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
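The analogous reproduction for the ANSI case would look like this (a sketch with the same connection assumptions as the earlier example; `view2` is an illustrative name):

```scala
conn.createStatement().execute(
  "CREATE GLOBAL TEMP VIEW view2 AS SELECT interval '1-1' year to month AS i")
// Served by SparkGetColumnsOperation; before SPARK-35085 this call failed
// with "Unrecognized type name: YEAR-MONTH INTERVAL".
val cols = conn.getMetaData.getColumns(null, "global_temp", "view2", "i")
while (cols.next()) {
  println(s"${cols.getString("COLUMN_NAME")} -> ${cols.getString("TYPE_NAME")}")
}
```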