74 commits
7dec5eb
[SPARK-47705][INFRA] Sort LogKey alphabetically and build a test to e…
dtenedor Apr 3, 2024
6a0555c
[SPARK-47700][SQL] Fix formatting of error messages with treeNode
jchen5 Apr 3, 2024
49eefc5
[SPARK-47722][SS] Wait until RocksDB background work finish before cl…
WweiL Apr 3, 2024
fbe6b1d
[SPARK-47721][DOC] Guidelines for the Structured Logging Framework
gengliangwang Apr 3, 2024
e3aab8c
[SPARK-47210][SQL] Addition of implicit casting without indeterminate…
mihailomilosevic2001 Apr 3, 2024
d87ac8e
[SPARK-47708][CONNECT] Do not log gRPC exception to stderr in PySpark
nemanja-boric-databricks Apr 4, 2024
447f8af
[SPARK-47720][CORE] Update `spark.speculation.multiplier` to 3 and `s…
dongjoon-hyun Apr 4, 2024
678aeb7
[SPARK-47683][PYTHON][BUILD] Decouple PySpark core API to pyspark.cor…
HyukjinKwon Apr 4, 2024
c25fd93
[SPARK-47705][INFRA][FOLLOWUP] Sort LogKey alphabetically and build a…
panbingkun Apr 4, 2024
d272a1b
[SPARK-47724][PYTHON][TESTS] Add an environment variable for testing …
HyukjinKwon Apr 4, 2024
d75c775
[SPARK-46812][PYTHON][TESTS][FOLLOWUP] Skip `pandas`-required tests i…
dongjoon-hyun Apr 4, 2024
3f6ac60
[SPARK-47577][CORE][PART1] Migrate logError with variables to structu…
gengliangwang Apr 4, 2024
f6999df
[SPARK-47081][CONNECT] Support Query Execution Progress
grundprinzip Apr 4, 2024
bffb02d
[SPARK-47565][PYTHON] PySpark worker pool crash resilience
Apr 4, 2024
3b8aea3
Revert "[SPARK-47708][CONNECT] Do not log gRPC exception to stderr in…
nemanja-boric-databricks Apr 4, 2024
5f9f5db
[SPARK-47689][SQL][FOLLOWUP] More accurate file path in TASK_WRITE_FA…
cloud-fan Apr 4, 2024
5ca3467
[SPARK-47729][PYTHON][TESTS] Get the proper default port for pyspark-…
HyukjinKwon Apr 4, 2024
25fc67f
[SPARK-47728][DOC] Document G1 Concurrent GC metrics
LucaCanali Apr 4, 2024
e3405c1
[SPARK-47610][CONNECT][FOLLOWUP] Add -Dio.netty.tryReflectionSetAcces…
pan3793 Apr 4, 2024
3fd0cd6
[SPARK-47598][CORE] MLLib: Migrate logError with variables to structu…
panbingkun Apr 4, 2024
240923c
[SPARK-46812][PYTHON][TESTS][FOLLOWUP] Check should_test_connect and …
dongjoon-hyun Apr 4, 2024
fb96b1a
[SPARK-47723][CORE][TESTS] Introduce a tool that can sort alphabetica…
panbingkun Apr 5, 2024
404d58c
[SPARK-47081][CONNECT][FOLLOW-UP] Add the `shell` module into PyPI pa…
HyukjinKwon Apr 5, 2024
b9ca91d
[SPARK-47712][CONNECT] Allow connect plugins to create and process Da…
tomvanbussel Apr 5, 2024
0107435
[SPARK-47734][PYTHON][TESTS] Fix flaky DataFrame.writeStream doctest …
JoshRosen Apr 5, 2024
d5620cb
[SPARK-47289][SQL] Allow extensions to log extended information in ex…
parthchandra Apr 5, 2024
aeb082e
[SPARK-47081][CONNECT][TESTS][FOLLOW-UP] Skip the flaky doctests for now
HyukjinKwon Apr 5, 2024
97e63ff
[SPARK-47735][PYTHON][TESTS] Make pyspark.testing.connectutils compat…
HyukjinKwon Apr 5, 2024
12d0367
[SPARK-47724][PYTHON][TESTS][FOLLOW-UP] Make testing script to inheri…
HyukjinKwon Apr 5, 2024
6bd0ccf
[SPARK-47511][SQL][FOLLOWUP] Rename the config REPLACE_NULLIF_USING_W…
cloud-fan Apr 5, 2024
c34baeb
[SPARK-47719][SQL] Change spark.sql.legacy.timeParserPolicy default t…
srielau Apr 5, 2024
18072b5
[SPARK-47577][CORE][PART2] Migrate logError with variables to structu…
gengliangwang Apr 5, 2024
1efbf43
[SPARK-47310][SS] Add micro-benchmark for merge operations for multip…
anishshri-db Apr 5, 2024
d1ace24
[SPARK-47582][SQL] Migrate Catalyst logInfo with variables to structu…
dtenedor Apr 5, 2024
11abc64
[SPARK-47094][SQL] SPJ : Dynamically rebalance number of buckets when…
szehon-ho Apr 6, 2024
42dc815
[SPARK-47743][CORE] Use milliseconds as the time unit in logging
gengliangwang Apr 6, 2024
7385f19
[SPARK-47592][CORE] Connector module: Migrate logError with variables…
panbingkun Apr 6, 2024
d69df59
[SPARK-47738][BUILD] Upgrade Kafka to 3.7.0
panbingkun Apr 6, 2024
60a3fbc
[SPARK-47727][PYTHON] Make SparkConf to root level to for both SparkS…
HyukjinKwon Apr 6, 2024
644687b
[SPARK-47709][BUILD] Upgrade tink to 1.13.0
LuciferYang Apr 6, 2024
4d9dbb3
[SPARK-46722][CONNECT][SS][TESTS][FOLLOW-UP] Drop the tables after te…
HyukjinKwon Apr 7, 2024
c11585a
[SPARK-47751][PYTHON][CONNECT] Make pyspark.worker_utils compatible w…
HyukjinKwon Apr 7, 2024
d743012
[SPARK-47753][PYTHON][CONNECT][TESTS] Make pyspark.testing compatible…
HyukjinKwon Apr 7, 2024
f7dff4a
[SPARK-47752][PS][CONNECT] Make pyspark.pandas compatible with pyspar…
HyukjinKwon Apr 7, 2024
e92e8f5
[SPARK-47744] Add support for negative-valued bytes in range encoder
neilramaswamy Apr 7, 2024
0c992b2
[SPARK-47755][CONNECT] Pivot should fail when the number of distinct …
zhengruifeng Apr 7, 2024
b299b2b
[SPARK-47299][PYTHON][DOCS] Use the same `versions.json` in the dropd…
panbingkun Apr 8, 2024
cc6c0eb
[MINOR][TESTS] Deduplicate test cases `test_parse_datatype_string`
HyukjinKwon Apr 8, 2024
ad2367c
[MINOR][PYTHON][SS][TESTS] Drop the tables after being used at `test_…
HyukjinKwon Apr 8, 2024
f576b85
[SPARK-47541][SQL] Collated strings in complex types supporting opera…
nikolamand-db Apr 8, 2024
d55bb61
[SPARK-47558][SS] State TTL support for ValueState
sahnib Apr 8, 2024
3a39ac2
[SPARK-47713][SQL][CONNECT] Fix a self-join failure
zhengruifeng Apr 8, 2024
eb8e997
[SPARK-47657][SQL] Implement collation filter push down support per f…
stefankandic Apr 8, 2024
f0d8f82
[SPARK-47750][DOCS][SQL] Postgres: Document Mapping Spark SQL Data Ty…
yaooqinn Apr 8, 2024
211afd4
[MINOR][PYTHON][CONNECT][TESTS] Enable `MapInPandasParityTests.test_d…
zhengruifeng Apr 8, 2024
f94d95d
[SPARK-47762][PYTHON][CONNECT] Add pyspark.sql.connect.protobuf into …
HyukjinKwon Apr 8, 2024
29d077f
[SPARK-47748][BUILD] Upgrade `zstd-jni` to 1.5.6-2
panbingkun Apr 8, 2024
60806c6
[SPARK-47746] Implement ordinal-based range encoding in the RocksDBSt…
neilramaswamy Apr 8, 2024
134a139
[SPARK-47681][SQL] Add schema_of_variant expression
chenhao-db Apr 8, 2024
abb7b04
[SPARK-47504][SQL] Resolve AbstractDataType simpleStrings for StringT…
mihailomilosevic2001 Apr 8, 2024
91b2331
[WIP] ListStateTTL implementation
ericm-db Apr 8, 2024
479392a
adding log lines
ericm-db Apr 8, 2024
7aab43e
test cases pass
ericm-db Apr 8, 2024
71f960d
spacing
ericm-db Apr 8, 2024
998764c
using NextIterator instead
ericm-db Apr 8, 2024
1dcb7d8
refactor feedback
ericm-db Apr 9, 2024
47867e7
undoing unnecessary change
ericm-db Apr 9, 2024
cfd30c3
refactor get_ttl_value
ericm-db Apr 9, 2024
4a19cb7
refactor test case
ericm-db Apr 9, 2024
993125c
specific doc for clearIfExpired
ericm-db Apr 9, 2024
fd5200f
moving isExpired to common place
ericm-db Apr 9, 2024
d43ffb1
refactoring to use common utils
ericm-db Apr 9, 2024
30f6094
updating interface header
ericm-db Apr 9, 2024
e9376d9
Map State TTL, Initial Commit
ericm-db Apr 9, 2024
[SPARK-47719][SQL] Change spark.sql.legacy.timeParserPolicy default to CORRECTED

### What changes were proposed in this pull request?

We changed the time parser policy in Spark 3.0.0.
Since then, the config has defaulted to raising an exception whenever there is a potential conflict between the legacy and the new policy.
Spark 4.0.0 is a good time to default to the new policy.
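
To make the change concrete, here is a minimal spark-shell style sketch; it is not part of this patch, and it assumes a Spark 4.0 snapshot build with a local session created purely for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative local session, not part of this patch.
val spark = SparkSession.builder().master("local[1]").getOrCreate()

// Previously (default EXCEPTION) this raised SparkUpgradeException with
// INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER.
// With the CORRECTED default the java.time.* parser is used directly:
// under ANSI mode it fails with CANNOT_PARSE_TIMESTAMP, otherwise it returns NULL.
spark.sql("select to_timestamp('1', 'yy')").show()

// The old fail-fast behavior can still be restored per session:
spark.conf.set("spark.sql.legacy.timeParserPolicy", "EXCEPTION")
```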

### Why are the changes needed?

Move the product forward and retire legacy behavior over time.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Ran existing unit tests and verified the changes.

### Was this patch authored or co-authored using generative AI tooling?

No

Closes apache#45859 from srielau/SPARK-47719-parser-policy-default-to-corrected.

Lead-authored-by: Serge Rielau <[email protected]>
Co-authored-by: Wenchen Fan <[email protected]>
Signed-off-by: Gengliang Wang <[email protected]>
2 people authored and gengliangwang committed Apr 5, 2024
commit c34baebb36d4e4c8895085b3114da8dc07165469
@@ -74,7 +74,9 @@ class ClientE2ETestSuite extends RemoteSparkSession with SQLHelper with PrivateM

for (enrichErrorEnabled <- Seq(false, true)) {
test(s"cause exception - ${enrichErrorEnabled}") {
withSQLConf("spark.sql.connect.enrichError.enabled" -> enrichErrorEnabled.toString) {
withSQLConf(
"spark.sql.connect.enrichError.enabled" -> enrichErrorEnabled.toString,
"spark.sql.legacy.timeParserPolicy" -> "EXCEPTION") {
val ex = intercept[SparkUpgradeException] {
spark
.sql("""
2 changes: 2 additions & 0 deletions docs/sql-migration-guide.md
@@ -46,6 +46,8 @@ license: |
- Since Spark 4.0, MySQL JDBC datasource will read FLOAT as FloatType, while in Spark 3.5 and previous, it was read as DoubleType. To restore the previous behavior, you can cast the column to the old type.
- Since Spark 4.0, MySQL JDBC datasource will read BIT(n > 1) as BinaryType, while in Spark 3.5 and previous, read as LongType. To restore the previous behavior, set `spark.sql.legacy.mysql.bitArrayMapping.enabled` to `true`.
- Since Spark 4.0, MySQL JDBC datasource will write ShortType as SMALLINT, while in Spark 3.5 and previous, it was written as INTEGER. To restore the previous behavior, you can cast the column to IntegerType before writing.
- Since Spark 4.0, the default value for `spark.sql.legacy.ctePrecedencePolicy` has been changed from `EXCEPTION` to `CORRECTED`. Instead of raising an error, inner CTE definitions take precedence over outer definitions.
- Since Spark 4.0, the default value for `spark.sql.legacy.timeParserPolicy` has been changed from `EXCEPTION` to `CORRECTED`. Instead of raising an `INCONSISTENT_BEHAVIOR_CROSS_VERSION` error, `CANNOT_PARSE_TIMESTAMP` will be raised if ANSI mode is enabled, and `NULL` will be returned if it is disabled. See [Datetime Patterns for Formatting and Parsing](sql-ref-datetime-pattern.html).

## Upgrading from Spark SQL 3.5.1 to 3.5.2

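To spell out the migration-guide bullet above (the `spark.sql.legacy.timeParserPolicy` entry) in code, here is a hedged sketch; it assumes an existing Spark 4.0 `SparkSession` named `spark`, and the expected outcomes follow the golden-file updates later in this diff:

```scala
// ANSI mode off: an unparsable datetime now quietly yields NULL.
spark.conf.set("spark.sql.ansi.enabled", "false")
spark.sql("select to_timestamp('9', 'DDD')").show()   // one NULL row instead of an upgrade error

// ANSI mode on: the same input fails with CANNOT_PARSE_TIMESTAMP (SQLSTATE 22007),
// replacing the previous INCONSISTENT_BEHAVIOR_CROSS_VERSION (42K0B) error,
// as the updated golden files at the end of this diff show.
spark.conf.set("spark.sql.ansi.enabled", "true")
// spark.sql("select to_timestamp('9', 'DDD')").collect()   // throws SparkDateTimeException
```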
1 change: 1 addition & 0 deletions python/pyspark/sql/tests/connect/test_connect_session.py
@@ -130,6 +130,7 @@ def test_error_enrichment_jvm_stacktrace(self):
{
"spark.sql.connect.enrichError.enabled": True,
"spark.sql.pyspark.jvmStacktrace.enabled": False,
"spark.sql.legacy.timeParserPolicy": "EXCEPTION",
}
):
with self.sql_conf({"spark.sql.connect.serverStacktrace.enabled": False}):
@@ -79,6 +79,6 @@ private[sql] object DefaultSqlApiConf extends SqlApiConf {
override def charVarcharAsString: Boolean = false
override def datetimeJava8ApiEnabled: Boolean = false
override def sessionLocalTimeZone: String = TimeZone.getDefault.getID
override def legacyTimeParserPolicy: LegacyBehaviorPolicy.Value = LegacyBehaviorPolicy.EXCEPTION
override def legacyTimeParserPolicy: LegacyBehaviorPolicy.Value = LegacyBehaviorPolicy.CORRECTED
override def defaultStringType: StringType = StringType
}
@@ -4027,13 +4027,13 @@ object SQLConf {
.doc("When LEGACY, java.text.SimpleDateFormat is used for formatting and parsing " +
"dates/timestamps in a locale-sensitive manner, which is the approach before Spark 3.0. " +
"When set to CORRECTED, classes from java.time.* packages are used for the same purpose. " +
"The default value is EXCEPTION, RuntimeException is thrown when we will get different " +
"results.")
"When set to EXCEPTION, RuntimeException is thrown when we will get different " +
"results. The default is CORRECTED.")
.version("3.0.0")
.stringConf
.transform(_.toUpperCase(Locale.ROOT))
.checkValues(LegacyBehaviorPolicy.values.map(_.toString))
.createWithDefault(LegacyBehaviorPolicy.EXCEPTION.toString)
.createWithDefault(LegacyBehaviorPolicy.CORRECTED.toString)

val LEGACY_ARRAY_EXISTS_FOLLOWS_THREE_VALUED_LOGIC =
buildConf("spark.sql.legacy.followThreeValuedLogicInArrayExists")
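For orientation, a hedged summary of the three accepted values of the config defined above; user code would set it through the session conf rather than the internal `buildConf` API, and an existing `SparkSession` named `spark` is assumed:

```scala
// LEGACY    -> java.text.SimpleDateFormat, locale-sensitive, pre-Spark-3.0 semantics.
// CORRECTED -> java.time.* formatters/parsers; the new default after this change.
// EXCEPTION -> fail at runtime whenever the two parsers would disagree; the old
//              default, still pinned explicitly in the test diffs below.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")     // opt back into 2.4-style parsing
spark.conf.set("spark.sql.legacy.timeParserPolicy", "EXCEPTION")  // restore the previous fail-fast default
spark.conf.set("spark.sql.legacy.timeParserPolicy", "CORRECTED")  // the new default (explicit no-op)
```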
@@ -185,7 +185,7 @@ class DateFormatterSuite extends DatetimeFormatterSuite {
val formatter = DateFormatter("MM-dd")
// The date parser in 2.4 accepts 1970-02-29 and turn it into 1970-03-01, so we should get a
// SparkUpgradeException here.
intercept[SparkUpgradeException](formatter.parse("02-29"))
intercept[DateTimeException](formatter.parse("02-29"))
}

test("SPARK-36418: default parsing w/o pattern") {
@@ -24,6 +24,7 @@ import org.scalatest.matchers.must.Matchers
import org.apache.spark.{SparkFunSuite, SparkIllegalArgumentException, SparkUpgradeException}
import org.apache.spark.sql.catalyst.plans.SQLHelper
import org.apache.spark.sql.catalyst.util.DateTimeTestUtils.{date, UTC}
import org.apache.spark.sql.internal.SQLConf

trait DatetimeFormatterSuite extends SparkFunSuite with SQLHelper with Matchers {
import DateTimeFormatterHelper._
@@ -99,34 +100,36 @@ trait DatetimeFormatterSuite extends SparkFunSuite with SQLHelper with Matchers
}

test("SPARK-31939: Fix Parsing day of year when year field pattern is missing") {
// resolved to queryable LocaleDate or fail directly
assertEqual("yyyy-dd-DD", "2020-29-60", date(2020, 2, 29))
assertError("yyyy-dd-DD", "2020-02-60",
"Field DayOfMonth 29 differs from DayOfMonth 2 derived from 2020-02-29")
assertEqual("yyyy-MM-DD", "2020-02-60", date(2020, 2, 29))
assertError("yyyy-MM-DD", "2020-03-60",
"Field MonthOfYear 2 differs from MonthOfYear 3 derived from 2020-02-29")
assertEqual("yyyy-MM-dd-DD", "2020-02-29-60", date(2020, 2, 29))
assertError("yyyy-MM-dd-DD", "2020-03-01-60",
"Field DayOfYear 61 differs from DayOfYear 60 derived from 2020-03-01")
assertEqual("yyyy-DDD", "2020-366", date(2020, 12, 31))
assertError("yyyy-DDD", "2019-366",
"Invalid date 'DayOfYear 366' as '2019' is not a leap year")
withSQLConf(SQLConf.LEGACY_TIME_PARSER_POLICY.key -> "EXCEPTION") {
// resolved to queryable LocaleDate or fail directly
assertEqual("yyyy-dd-DD", "2020-29-60", date(2020, 2, 29))
assertError("yyyy-dd-DD", "2020-02-60",
"Field DayOfMonth 29 differs from DayOfMonth 2 derived from 2020-02-29")
assertEqual("yyyy-MM-DD", "2020-02-60", date(2020, 2, 29))
assertError("yyyy-MM-DD", "2020-03-60",
"Field MonthOfYear 2 differs from MonthOfYear 3 derived from 2020-02-29")
assertEqual("yyyy-MM-dd-DD", "2020-02-29-60", date(2020, 2, 29))
assertError("yyyy-MM-dd-DD", "2020-03-01-60",
"Field DayOfYear 61 differs from DayOfYear 60 derived from 2020-03-01")
assertEqual("yyyy-DDD", "2020-366", date(2020, 12, 31))
assertError("yyyy-DDD", "2019-366",
"Invalid date 'DayOfYear 366' as '2019' is not a leap year")

// unresolved and need to check manually(SPARK-31939 fixed)
assertEqual("DDD", "365", date(1970, 12, 31))
assertError("DDD", "366",
"Invalid date 'DayOfYear 366' as '1970' is not a leap year")
assertEqual("MM-DD", "03-60", date(1970, 3))
assertError("MM-DD", "02-60",
"Field MonthOfYear 2 differs from MonthOfYear 3 derived from 1970-03-01")
assertEqual("MM-dd-DD", "02-28-59", date(1970, 2, 28))
assertError("MM-dd-DD", "02-28-60",
"Field MonthOfYear 2 differs from MonthOfYear 3 derived from 1970-03-01")
assertError("MM-dd-DD", "02-28-58",
"Field DayOfMonth 28 differs from DayOfMonth 27 derived from 1970-02-27")
assertEqual("dd-DD", "28-59", date(1970, 2, 28))
assertError("dd-DD", "27-59",
"Field DayOfMonth 27 differs from DayOfMonth 28 derived from 1970-02-28")
// unresolved and need to check manually(SPARK-31939 fixed)
assertEqual("DDD", "365", date(1970, 12, 31))
assertError("DDD", "366",
"Invalid date 'DayOfYear 366' as '1970' is not a leap year")
assertEqual("MM-DD", "03-60", date(1970, 3))
assertError("MM-DD", "02-60",
"Field MonthOfYear 2 differs from MonthOfYear 3 derived from 1970-03-01")
assertEqual("MM-dd-DD", "02-28-59", date(1970, 2, 28))
assertError("MM-dd-DD", "02-28-60",
"Field MonthOfYear 2 differs from MonthOfYear 3 derived from 1970-03-01")
assertError("MM-dd-DD", "02-28-58",
"Field DayOfMonth 28 differs from DayOfMonth 27 derived from 1970-02-27")
assertEqual("dd-DD", "28-59", date(1970, 2, 28))
assertError("dd-DD", "27-59",
"Field DayOfMonth 27 differs from DayOfMonth 28 derived from 1970-02-28")
}
}
}
@@ -36,23 +36,25 @@ class TimestampFormatterSuite extends DatetimeFormatterSuite {
override protected def useDateFormatter: Boolean = false

test("parsing timestamps using time zones") {
val localDate = "2018-12-02T10:11:12.001234"
val expectedMicros = Map(
"UTC" -> 1543745472001234L,
PST.getId -> 1543774272001234L,
CET.getId -> 1543741872001234L,
"Africa/Dakar" -> 1543745472001234L,
"America/Los_Angeles" -> 1543774272001234L,
"Asia/Urumqi" -> 1543723872001234L,
"Asia/Hong_Kong" -> 1543716672001234L,
"Europe/Brussels" -> 1543741872001234L)
outstandingTimezonesIds.foreach { zoneId =>
val formatter = TimestampFormatter(
"yyyy-MM-dd'T'HH:mm:ss.SSSSSS",
getZoneId(zoneId),
isParsing = true)
val microsSinceEpoch = formatter.parse(localDate)
assert(microsSinceEpoch === expectedMicros(zoneId))
withSQLConf(SQLConf.LEGACY_TIME_PARSER_POLICY.key -> "EXCEPTION") {
val localDate = "2018-12-02T10:11:12.001234"
val expectedMicros = Map(
"UTC" -> 1543745472001234L,
PST.getId -> 1543774272001234L,
CET.getId -> 1543741872001234L,
"Africa/Dakar" -> 1543745472001234L,
"America/Los_Angeles" -> 1543774272001234L,
"Asia/Urumqi" -> 1543723872001234L,
"Asia/Hong_Kong" -> 1543716672001234L,
"Europe/Brussels" -> 1543741872001234L)
outstandingTimezonesIds.foreach { zoneId =>
val formatter = TimestampFormatter(
"yyyy-MM-dd'T'HH:mm:ss.SSSSSS",
getZoneId(zoneId),
isParsing = true)
val microsSinceEpoch = formatter.parse(localDate)
assert(microsSinceEpoch === expectedMicros(zoneId))
}
}
}

@@ -13,13 +13,13 @@ select to_timestamp('1', 'yy')
-- !query schema
struct<>
-- !query output
org.apache.spark.SparkUpgradeException
org.apache.spark.SparkDateTimeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"errorClass" : "CANNOT_PARSE_TIMESTAMP",
"sqlState" : "22007",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'1'"
"ansiConfig" : "\"spark.sql.ansi.enabled\"",
"message" : "Text '1' could not be parsed at index 0"
}
}

@@ -45,13 +45,13 @@ select to_timestamp('123', 'yy')
-- !query schema
struct<>
-- !query output
org.apache.spark.SparkUpgradeException
org.apache.spark.SparkDateTimeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"errorClass" : "CANNOT_PARSE_TIMESTAMP",
"sqlState" : "22007",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'123'"
"ansiConfig" : "\"spark.sql.ansi.enabled\"",
"message" : "Text '123' could not be parsed, unparsed text found at index 2"
}
}

@@ -61,13 +61,13 @@ select to_timestamp('1', 'yyy')
-- !query schema
struct<>
-- !query output
org.apache.spark.SparkUpgradeException
org.apache.spark.SparkDateTimeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"errorClass" : "CANNOT_PARSE_TIMESTAMP",
"sqlState" : "22007",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'1'"
"ansiConfig" : "\"spark.sql.ansi.enabled\"",
"message" : "Text '1' could not be parsed at index 0"
}
}

@@ -110,13 +110,13 @@ select to_timestamp('9', 'DD')
-- !query schema
struct<>
-- !query output
org.apache.spark.SparkUpgradeException
org.apache.spark.SparkDateTimeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"errorClass" : "CANNOT_PARSE_TIMESTAMP",
"sqlState" : "22007",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'9'"
"ansiConfig" : "\"spark.sql.ansi.enabled\"",
"message" : "Text '9' could not be parsed at index 0"
}
}

@@ -142,13 +142,13 @@ select to_timestamp('9', 'DDD')
-- !query schema
struct<>
-- !query output
org.apache.spark.SparkUpgradeException
org.apache.spark.SparkDateTimeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"errorClass" : "CANNOT_PARSE_TIMESTAMP",
"sqlState" : "22007",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'9'"
"ansiConfig" : "\"spark.sql.ansi.enabled\"",
"message" : "Text '9' could not be parsed at index 0"
}
}

@@ -158,13 +158,13 @@ select to_timestamp('99', 'DDD')
-- !query schema
struct<>
-- !query output
org.apache.spark.SparkUpgradeException
org.apache.spark.SparkDateTimeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"errorClass" : "CANNOT_PARSE_TIMESTAMP",
"sqlState" : "22007",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'99'"
"ansiConfig" : "\"spark.sql.ansi.enabled\"",
"message" : "Text '99' could not be parsed at index 0"
}
}

@@ -284,17 +284,9 @@ org.apache.spark.SparkDateTimeException
-- !query
select from_csv('2018-366', 'date Date', map('dateFormat', 'yyyy-DDD'))
-- !query schema
struct<>
struct<from_csv(2018-366):struct<date:date>>
-- !query output
org.apache.spark.SparkUpgradeException
{
"errorClass" : "INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER",
"sqlState" : "42K0B",
"messageParameters" : {
"config" : "\"spark.sql.legacy.timeParserPolicy\"",
"datetime" : "'2018-366'"
}
}
{"date":null}


-- !query