[SPARK-26538][SQL] Set default precision and scale for elements of postgres numeric array #23456
`PostgresDialect.scala`:

```diff
@@ -60,7 +60,12 @@ private object PostgresDialect extends JdbcDialect {
     case "bytea" => Some(BinaryType)
     case "timestamp" | "timestamptz" | "time" | "timetz" => Some(TimestampType)
     case "date" => Some(DateType)
-    case "numeric" | "decimal" => Some(DecimalType.bounded(precision, scale))
+    case "numeric" | "decimal" => if (precision > 0) {
+      Some(DecimalType.bounded(precision, scale))
+    } else {
+      // SPARK-26538: handle numeric without explicit precision and scale.
+      Some(DecimalType.SYSTEM_DEFAULT)
+    }
     case _ => None
   }
```
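For reference, a minimal standalone sketch of the branch above; the helper name `numericToCatalyst` is made up for illustration, and `precision` and `scale` stand in for the values read from the column's JDBC metadata. In Spark, `DecimalType.SYSTEM_DEFAULT` is `DecimalType(38, 18)`.

```scala
import org.apache.spark.sql.types._

// Sketch of the numeric/decimal mapping after this change (hypothetical helper).
def numericToCatalyst(precision: Int, scale: Int): Option[DataType] = {
  if (precision > 0) {
    // Constrained numeric(p, s): keep the declared precision and scale,
    // capped at Spark's 38-digit maximum by DecimalType.bounded.
    Some(DecimalType.bounded(precision, scale))
  } else {
    // Unconstrained numeric: the Postgres JDBC driver reports 0 for both
    // precision and scale, so fall back to the system default (38, 18).
    Some(DecimalType.SYSTEM_DEFAULT)
  }
}

// numericToCatalyst(10, 2) => Some(DecimalType(10, 2))
// numericToCatalyst(0, 0)  => Some(DecimalType(38, 18))
```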
I would like to confirm just in case: we don't check `scale` in this PR? Probably, this might be related to the discussion in #23458 (comment).
The Postgres doc says that specifying `numeric` without any precision or scale creates a column in which values of any precision and scale can be stored. What the Postgres JDBC driver returned in the case of such a `numeric` was 0 for both scale and precision. The condition proposed in the linked ticket, and originally used here, was roughly `precision > 0 || scale > 0`, but I cannot come up with a valid case having precision <= 0 while having scale > 0. Is there another case where we would have a decimal with precision 0? Could someone explain?
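For anyone who wants to reproduce the driver behavior described above, a quick check against a local Postgres instance could look like the following sketch; the connection URL, credentials, and table are placeholders.

```scala
import java.sql.DriverManager

// Placeholder connection details; assumes a table created with:
//   CREATE TABLE t (a numeric, b numeric(10, 2))
val conn = DriverManager.getConnection(
  "jdbc:postgresql://localhost:5432/testdb", "user", "password")
try {
  val rs = conn.createStatement().executeQuery("SELECT a, b FROM t")
  val md = rs.getMetaData
  // Unconstrained numeric: the driver reports 0 for both values.
  println((md.getPrecision(1), md.getScale(1)))  // (0, 0)
  // numeric(10, 2): the declared precision and scale.
  println((md.getPrecision(2), md.getScale(2)))  // (10, 2)
} finally {
  conn.close()
}
```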
Yea, but I think we'd better add the `scale > 0` check too, just as a safeguard.
I think this is fine, actually. We do not support decimals with precision < 0, so this is most likely enough.
Actually, I still agree with @maropu (#23456 (comment)), but it looks okay because this is `PostgresDialect.scala`.
If we'd add `|| scale > 0`, we'd allow a precision <= 0, which doesn't make any sense and is not supported by Spark's decimal. So I think this is fine.
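To make the difference between the two guards concrete, here is a small hypothetical enumeration of the `(precision, scale)` pairs discussed in this thread:

```scala
// Hypothetical (precision, scale) pairs a driver could report, checked
// against the guard in this PR and the wider guard from the linked ticket.
val cases = Seq((10, 2), (0, 0), (0, 2))
for ((p, s) <- cases) {
  val prGuard  = p > 0           // this PR: keep declared precision/scale
  val altGuard = p > 0 || s > 0  // alternative discussed above
  println(s"(p=$p, s=$s): pr=$prGuard, alt=$altGuard")
}
// (10, 2): both guards keep DecimalType.bounded(10, 2).
// (0, 0):  both guards fall back to DecimalType.SYSTEM_DEFAULT.
// (0, 2):  only the alternative guard would keep the reported values and
//          build a decimal with precision <= 0, which Spark rejects.
```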