
Conversation

@lidinghao (Contributor) commented Aug 8, 2019

What changes were proposed in this pull request?

Creating a Hive partitioned table without specifying a data type for the partition column unexpectedly succeeds.

// create a hive table partitioned by b, but the data type of b isn't specified.
CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet

In https://issues.apache.org/jira/browse/SPARK-26435, the PARTITIONED BY clause was extended to support Hive CTAS as follows:

// Before
(PARTITIONED BY '(' partitionColumns=colTypeList ')'

 // After
(PARTITIONED BY '(' partitionColumns=colTypeList ')'|
PARTITIONED BY partitionColumnNames=identifierList) |

A CREATE TABLE statement like the one above passes the syntax check and is recognized as (PARTITIONED BY partitionColumnNames=identifierList).

This PR checks this case in visitCreateHiveTable and throws an exception with an explicit error message for the user.

How was this patch tested?

Added tests.

@lidinghao (Contributor, Author):

cc @xuanyuanking

    CreateTable(tableDescWithPartitionColNames, mode, Some(q))
  }
case None => CreateTable(tableDesc, mode, None)
case None =>
Member:

What's the behavior of Hive for this scenario?

@lidinghao (Contributor, Author) Aug 9, 2019:

In Spark 2.4 it throws an exception like the following; in Spark 3.0 it succeeds:

spark-sql> CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
Error in query:
extraneous input ')' expecting {'SELECT', 'FROM', 'ADD', 'AS', 'ALL', 'ANY', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'SEMI', 'RIGHT', 'FULL', 'NATURAL', 'ON', 'PIVOT', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'AFTER', 'LAST', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'DIRECTORY', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'COST', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'MACRO', 'IGNORE', 'BOTH', 'LEADING', 'TRAILING', 'IF', 'POSITION', 'EXTRACT', 'DIV', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 'EXPORT', 
'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 'INPATH', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 41)

== SQL ==
CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet
-----------------------------------------^^^

I will test the behavior of Hive later.

Contributor (Author):

In Hive 2.3.2, it throws an exception like the following:

> CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
Error: Error while compiling statement: FAILED: ParseException line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type (state=42000,code=40000)

Member:

Thanks for your investigation, Hao!
I think throwing an exception here for a missing partition column type is the right behavior.
The current behavior is a regression bug introduced by #23376, which drops the partition column without a type:

spark-sql> CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
Time taken: 1.856 seconds
spark-sql> desc tbl;
a	int	NULL
Time taken: 0.46 seconds, Fetched 1 row(s)

Could you also test the behavior in Hive 3.0?

Contributor (Author):

Hi Yuanjian, thanks for the reasoning.
Agreed. Spark 2.4 and earlier versions throw an exception when the partition column type is missing; #23376 introduced the current behavior, and this PR intends to check this case and throw an exception.

I don't have a Hive 3 environment at hand, so I added a unit test case on the Hive 3.1 branch and ran it; an exception is thrown, as in Hive 2:

java.lang.RuntimeException: CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet; 
failed: (responseCode = 40000, errorMessage = FAILED: ParseException line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type, SQLState = 42000, exception = line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type)

Member:

Thanks for checking; then I'm sure it's a regression bug. Is it possible to fix it in SqlBase.g4?
If that's hard to do, I'm OK with adding this logic in SparkSqlParser.

@lidinghao (Contributor, Author) Aug 12, 2019:

Hi Yuanjian, I tried several ways to fix this case in SqlBase.g4, both before and after submitting this PR, but the results weren't good. Here is a summary of the solutions.

If we try to fix this in SqlBase.g4, i.e. during the syntax analysis phase, we have to change the createHiveTable rule so that ANTLR splits the analysis into two branches: one for CTAS, which accepts partition columns defined without data types and requires an AS query sub-clause at the end of the DDL statement, and another for plain CREATE TABLE, which accepts partition columns defined with data types and cannot have an AS query sub-clause.

When a user writes an illegal CREATE TABLE DDL with a partition column's data type missing, the syntax analyzer matches it against the first branch and gives the user a misleading error message. Likewise, if a user writes a CTAS DDL with partition column data types defined, the analyzer matches it against the second branch and again gives a misleading error message.

If we fix this in SparkSqlParser, i.e. during the semantic analysis phase, we can not only handle the illegal CTAS and CREATE TABLE DDL cases as ANTLR would, but also give the user an explicit and useful error message.
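The semantic-phase check can be illustrated outside of Spark with a toy model (this is not Spark's actual code; the function name and error message here are illustrative). After parsing, we know both the partition column list and whether an AS query follows, so a plain CREATE TABLE with untyped partition columns can be rejected with a precise message, while CTAS is allowed to infer the types from the query:

```python
# Toy model of the post-parse (semantic) check, not Spark's implementation.
# partition_cols: list of (name, type_or_None) pairs parsed from PARTITIONED BY.
# has_as_query: whether the statement ends with an AS <query> sub-clause (CTAS).

def check_create_table(partition_cols, has_as_query):
    untyped = [name for name, coltype in partition_cols if coltype is None]
    if untyped and not has_as_query:
        # Plain CREATE TABLE: every partition column needs an explicit type,
        # so fail with a message naming the offending columns.
        raise ValueError(
            "Must specify a data type for each partition column while creating "
            "Hive partitioned table: " + ", ".join(untyped))
    # CTAS may omit partition column types: they are inferred from the query.
    return "OK"
```

Because the check runs after a successful parse, the error message can point at exactly what is wrong, instead of the generic "cannot recognize input" a grammar-level rejection produces.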

Member:

Got it, thanks for trying! Then I'm OK with fixing this behavior in SparkSqlParser.

@lidinghao changed the title from "[SPARK-28662] [SQL] Create Hive Partitioned Table without specifying data type for partition columns will success" to "[SPARK-28662] [SQL] Create Hive Partitioned Table without specifying data type for partition columns will success in Spark 3.0" on Aug 9, 2019
@lidinghao changed the title to "[SPARK-28662] [SQL] Create Hive Partitioned Table without specifying data type for partition columns will success unexpectedly in Spark 3.0" on Aug 10, 2019
@lidinghao changed the title to "[SPARK-28662] [SQL] Create Hive Partitioned Table DDL should fail when partition column type missed" on Aug 10, 2019
@xuanyuanking (Member) left a comment:

LGTM, cc @cloud-fan for a double check.

case None =>
  // When creating partitioned table, we must specify data type for the partition columns.
  if (Option(ctx.partitionColumnNames).isDefined) {
    val errorMessage = "Create Partitioned Table must specify data type for " +
Member:

Nit: "Must specify a data type for each partition column while creating Hive partitioned table. "

Contributor (Author):

Done, I've improved the error message in the latest commit.

@cloud-fan (Contributor):

ok to test

@SparkQA commented Aug 19, 2019:

Test build #109352 has finished for PR 25390 at commit 935af87.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan cloud-fan closed this in 79464be Aug 20, 2019
@cloud-fan (Contributor):

thanks, merging to master!
