@@ -985,7 +985,15 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
       } else {
         CreateTable(tableDescWithPartitionColNames, mode, Some(q))
       }
-      case None => CreateTable(tableDesc, mode, None)
+      case None =>
Member:
What's the behavior of Hive for this scenario?

Contributor Author @lidinghao (Aug 9, 2019):
In Spark 2.4, this throws an exception like the following, while in Spark 3.0 it succeeds:

spark-sql> CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
Error in query:
extraneous input ')' expecting {'SELECT', 'FROM', 'ADD', 'AS', 'ALL', 'ANY', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'SEMI', 'RIGHT', 'FULL', 'NATURAL', 'ON', 'PIVOT', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'AFTER', 'LAST', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'DIRECTORY', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'COST', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'MACRO', 'IGNORE', 'BOTH', 'LEADING', 'TRAILING', 'IF', 'POSITION', 'EXTRACT', 'DIV', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 'EXPORT', 
'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 'INPATH', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 41)

== SQL ==
CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet
-----------------------------------------^^^

I will test the behavior of Hive later.

Contributor Author @lidinghao:

In Hive 2.3.2, it throws an exception like the following:

> CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
Error: Error while compiling statement: FAILED: ParseException line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type (state=42000,code=40000)

Member:

Thanks for your investigation, Hao!
I think throwing an exception for a missing partition column type is the right behavior here.
The current behavior is a regression introduced by #23376, which drops the partition column that has no type:

spark-sql> CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
Time taken: 1.856 seconds
spark-sql> desc tbl;
a	int	NULL
Time taken: 0.46 seconds, Fetched 1 row(s)

Could you also test the behavior in Hive 3.0?
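The regression shown in the `desc tbl` output above can be modeled in a few lines. This is an illustrative sketch, not Spark's actual code: it assumes a simplified schema builder that silently discards partition columns lacking a data type, which is the observable effect being discussed.

```scala
// Hypothetical model of the regression: a schema built from
//   CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet
// silently loses the untyped partition column `b`.
case class Col(name: String, dataType: Option[String])

def buildSchema(dataCols: Seq[Col], partCols: Seq[Col]): Seq[Col] =
  dataCols ++ partCols.filter(_.dataType.nonEmpty) // untyped columns are dropped

val schema = buildSchema(Seq(Col("a", Some("int"))), Seq(Col("b", None)))
assert(schema.map(_.name) == Seq("a")) // only `a int` survives, matching `desc tbl`
```

The fix under discussion rejects such statements up front instead of letting the column vanish.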

Contributor Author @lidinghao:

Hi Yuanjian, thanks for the reasoning.
I agree: Spark 2.4 and earlier throw an exception when a partition column's type is missing; #23376 introduced the current behavior, and this PR intends to detect this case and throw an exception.

I don't have a Hive 3 environment on hand, so I added a unit test case on the Hive 3.1 branch and ran it; an exception is thrown, as in Hive 2:

java.lang.RuntimeException: CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet; 
failed: (responseCode = 40000, errorMessage = FAILED: ParseException line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type, SQLState = 42000, exception = line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type)

Member:

Thanks for checking; then I'm sure it's a regression bug. Is it possible to fix this in SqlBase.g4?
If that's hard to do, I'm OK with adding this logic in SparkSqlParser.

Contributor Author @lidinghao (Aug 12, 2019):

Hi Yuanjian, I tried several solutions that fix this case in SqlBase.g4, both before and after submitting this PR, but the results weren't good. Here is a summary.

If we fix this in SqlBase.g4, i.e. during the syntax-analysis phase, we have to change the createHiveTable rule so that ANTLR splits the parse into two branches: one for CTAS, which accepts partition columns defined without a data type but requires an AS query sub-clause at the end of the DDL statement, and another for plain CREATE TABLE, which requires a data type for each partition column and forbids an AS query sub-clause.

When a user writes an illegal CREATE TABLE DDL with partition column data types missing, the parser matches it against the first branch and produces a misleading error message. Conversely, if a user writes a CTAS DDL with partition column data types defined, the parser matches it against the second branch and again produces a misleading error message.

If we fix this in SparkSqlParser, i.e. during the semantic-analysis phase, we can not only handle the illegal CTAS and CREATE TABLE cases as ANTLR would, but also give the user an explicit and useful error message.
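The semantic-phase approach described above can be sketched as follows. This is a simplified, hypothetical model (the names `PartitionCol` and `validateCreateTable` are illustrative, not Spark's API): the parser accepts both typed and untyped partition columns, and a later validation step rejects non-CTAS statements whose partition columns lack a type, with a targeted message.

```scala
// Illustrative model of a semantic-phase check for untyped partition columns.
case class PartitionCol(name: String, dataType: Option[String])

def validateCreateTable(
    partitionCols: Seq[PartitionCol],
    isCTAS: Boolean): Either[String, Unit] = {
  val untyped = partitionCols.filter(_.dataType.isEmpty)
  if (!isCTAS && untyped.nonEmpty)
    // Plain CREATE TABLE: every partition column needs an explicit type.
    Left(s"Must specify a data type for partition column(s): ${untyped.map(_.name).mkString(", ")}")
  else
    // CTAS may legally omit types; they are inferred from the query.
    Right(())
}

// CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet  -> rejected
assert(validateCreateTable(Seq(PartitionCol("b", None)), isCTAS = false).isLeft)
// CTAS with an untyped partition column is accepted
assert(validateCreateTable(Seq(PartitionCol("b", None)), isCTAS = true).isRight)
// A typed partition column is always fine
assert(validateCreateTable(Seq(PartitionCol("b", Some("string"))), isCTAS = false).isRight)
```

Because the check runs after parsing, the error can name the offending columns directly instead of surfacing a generic grammar mismatch.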

Member:

Got it, thanks for trying! Then I'm ok to fix this behavior in SparkSqlParser.

+          // When creating a partitioned table, we must specify a data type for each
+          // partition column.
+          if (Option(ctx.partitionColumnNames).isDefined) {
+            val errorMessage = "Must specify a data type for each partition column while creating " +
+              "Hive partitioned table."
+            operationNotAllowed(errorMessage, ctx)
+          }
+
+          CreateTable(tableDesc, mode, None)
       }
     }

@@ -548,6 +548,14 @@ class HiveDDLSuite
       assert(e.message == "Found duplicate column(s) in the table definition of `default`.`tbl`: `a`")
     }
 
+  test("create partitioned table without specifying data type for the partition columns") {
+    val e = intercept[AnalysisException] {
+      sql("CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet")
+    }
+    assert(e.message.contains("Must specify a data type for each partition column while creating " +
+      "Hive partitioned table."))
+  }
+
   test("add/drop partition with location - managed table") {
     val tab = "tab_with_partitions"
     withTempDir { tmpDir =>