set column size to 16
yaooqinn committed Aug 26, 2020
commit bd3e6925427ae3d97ba84382e48b06e5a56cd254
@@ -130,7 +130,8 @@ private[hive] class SparkGetColumnsOperation(
   * For array, map, string, and binaries, the column size is variable, return null as unknown.
   */
  private def getColumnSize(typ: DataType): Option[Int] = typ match {
-   case dt @ (BooleanType | _: NumericType | DateType | TimestampType) => Some(dt.defaultSize)
+   case dt @ (BooleanType | _: NumericType | DateType | TimestampType | CalendarIntervalType) =>
+     Some(dt.defaultSize)
    case StructType(fields) =>
      val sizeArr = fields.map(f => getColumnSize(f.dataType))
      if (sizeArr.contains(None)) {
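For context, CalendarIntervalType.defaultSize is 16 bytes: a calendar interval is stored as an Int months field, an Int days field, and a Long microseconds field (4 + 4 + 8). A minimal sketch to confirm the sizes involved here, assuming a Spark 3.x classpath (the object name is hypothetical):

import org.apache.spark.sql.types._

// defaultSize is the default number of bytes a value of the type occupies.
// For CalendarIntervalType it is fixed (4 + 4 + 8 = 16), which is why this
// commit can report a concrete COLUMN_SIZE instead of null.
object DefaultSizeCheck {
  def main(args: Array[String]): Unit = {
    println(CalendarIntervalType.defaultSize) // 16
    println(TimestampType.defaultSize)        // 8
    println(BooleanType.defaultSize)          // 1
  }
}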
@@ -349,7 +349,7 @@ class SparkMetadataOperationSuite extends HiveThriftJdbcTest {
assert(rowSet.getString("COLUMN_NAME") === "i")
assert(rowSet.getInt("DATA_TYPE") === java.sql.Types.OTHER)
assert(rowSet.getString("TYPE_NAME").equalsIgnoreCase(CalendarIntervalType.sql))
Contributor:
If we treat it as string type, can we still report the TYPE_NAME as INTERVAL?

@yaooqinn (Member, Author) · Aug 27, 2020:
I guess the metadata operations should do nothing but describe exactly how the objects (databases/tables/columns, etc.) are stored in the Spark system.


If we change the column metadata here from INTERVAL to STRING, JDBC users may guess that the column is orderable and comparable. They would have no idea what they are actually dealing with, and would come away with a completely different view of the metadata than users who run self-contained spark-sql applications.
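To make the concern concrete: calendar intervals are not orderable in Spark SQL, while strings are, so advertising the column as STRING would suggest capabilities it does not have. A minimal sketch, assuming a local SparkSession (the view name t is hypothetical):

import org.apache.spark.sql.{AnalysisException, SparkSession}

val spark = SparkSession.builder().master("local[1]").appName("interval-order").getOrCreate()
spark.sql("SELECT interval 1 day AS i").createOrReplaceTempView("t")
try {
  // Analysis fails: CalendarIntervalType is not an orderable type,
  // whereas a STRING column would sort without complaint.
  spark.sql("SELECT * FROM t ORDER BY i").collect()
} catch {
  case e: AnalysisException => println(e.getMessage)
}
spark.stop()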

assert(rowSet.getInt("COLUMN_SIZE") === 0)
assert(rowSet.getInt("COLUMN_SIZE") === CalendarIntervalType.defaultSize)
assert(rowSet.getInt("DECIMAL_DIGITS") === 0)
assert(rowSet.getInt("NUM_PREC_RADIX") === 0)
assert(rowSet.getInt("NULLABLE") === 0)
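On the client side, these fields surface through java.sql.DatabaseMetaData.getColumns. A minimal sketch of what a JDBC user would observe after this change, assuming a Thrift server at the hypothetical localhost:10000 and a table with an interval column (connection string and table name are placeholders):

import java.sql.DriverManager

// Hypothetical connection details and table name; adjust for your deployment.
val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "user", "")
try {
  val rs = conn.getMetaData.getColumns(null, "%", "interval_tbl", "%")
  while (rs.next()) {
    // With this commit, an interval column reports COLUMN_SIZE = 16
    // (CalendarIntervalType.defaultSize) rather than 0.
    println(s"${rs.getString("COLUMN_NAME")}: type=${rs.getString("TYPE_NAME")}, " +
      s"size=${rs.getInt("COLUMN_SIZE")}")
  }
} finally conn.close()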