
Commit a3d8394

gatorsmile authored and HyukjinKwon committed
[SPARK-31351][DOC] Migration Guide Auditing for Spark 3.0 Release
### What changes were proposed in this pull request?

This PR is to audit the migration guides in Spark 3.0 release:

- correct the grammar errors
- clean up some items
- replace HTML table by markdown table

### Why are the changes needed?

N/A

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

Screenshot:

![screencapture-127-0-0-1-4000-sql-migration-guide-html-2020-04-04-21_36_29](https://user-images.githubusercontent.com/11567269/78467043-9477d800-76bd-11ea-8ab0-3d51ea5e9fa5.png)
![Screen Shot 2020-04-04 at 9 28 13 PM](https://user-images.githubusercontent.com/11567269/78467045-98a3f580-76bd-11ea-9e4b-927bf12e683a.png)
![Screen Shot 2020-04-04 at 9 28 02 PM](https://user-images.githubusercontent.com/11567269/78467046-98a3f580-76bd-11ea-8ea3-9f13cb8d200b.png)
![Screen Shot 2020-04-04 at 9 21 40 PM](https://user-images.githubusercontent.com/11567269/78467047-993c8c00-76bd-11ea-8c29-91afc68eb590.png)

Closes apache#28125 from gatorsmile/updateMigrationGuide3.0.

Authored-by: gatorsmile <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
1 parent 0fc859b commit a3d8394

File tree

5 files changed: +164 −315 lines changed


docs/core-migration-guide.md

Lines changed: 3 additions & 3 deletions
@@ -26,7 +26,7 @@ license: |
 
 - The `org.apache.spark.ExecutorPlugin` interface and related configuration has been replaced with
   `org.apache.spark.plugin.SparkPlugin`, which adds new functionality. Plugins using the old
-  interface need to be modified to extend the new interfaces. Check the
+  interface must be modified to extend the new interfaces. Check the
   [Monitoring](monitoring.html) guide for more details.
 
 - Deprecated method `TaskContext.isRunningLocally` has been removed. Local execution was removed and it always has returned `false`.
@@ -35,6 +35,6 @@ license: |
 
 - Deprecated method `AccumulableInfo.apply` have been removed because creating `AccumulableInfo` is disallowed.
 
-- Event log file will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously Spark writes event log file as default charset of driver JVM process, so Spark History Server of Spark 2.x is needed to read the old event log files in case of incompatible encoding.
+- Event log file will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously Spark wrote the event log file as default charset of driver JVM process, so Spark History Server of Spark 2.x is needed to read the old event log files in case of incompatible encoding.
 
-- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. Old external shuffle services can still be used by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
+- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. You can still use old external shuffle services by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
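As an editorial aside on the shuffle-protocol item above (not part of the patch): a minimal PySpark sketch of re-enabling the old fetch protocol while an external shuffle service has not yet been upgraded. The application name is an illustrative placeholder; the two configuration keys come from the guide text and the standard Spark configuration.

```python
from pyspark.sql import SparkSession

# Illustrative only: keep the pre-3.0 shuffle fetch protocol so that Spark 3.0
# executors can still talk to a not-yet-upgraded external shuffle service.
spark = (
    SparkSession.builder
    .appName("legacy-shuffle-fetch-demo")                  # placeholder name
    .config("spark.shuffle.service.enabled", "true")       # use the external shuffle service
    .config("spark.shuffle.useOldFetchProtocol", "true")   # fall back to the old fetch protocol
    .getOrCreate()
)
```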

docs/css/main.css

Lines changed: 31 additions & 0 deletions
@@ -2,6 +2,37 @@
    Author's custom styles
    ========================================================================== */
 
+table {
+  margin: 15px 0;
+  padding: 0;
+}
+
+table tr {
+  border-top: 1px solid #cccccc;
+  background-color: white;
+  margin: 0;
+  padding: 0;
+}
+
+table tr:nth-child(2n) {
+  background-color: #F1F4F5;
+}
+
+table tr th {
+  font-weight: bold;
+  border: 1px solid #cccccc;
+  text-align: left;
+  margin: 0;
+  padding: 6px 13px;
+}
+
+table tr td {
+  border: 1px solid #cccccc;
+  text-align: left;
+  margin: 0;
+  padding: 6px 13px;
+}
+
 .navbar .brand {
   height: 50px;
   width: 110px;

docs/pyspark-migration-guide.md

Lines changed: 18 additions & 60 deletions
@@ -27,67 +27,25 @@ Many items of SQL migration can be applied when migrating PySpark to higher vers
 Please refer [Migration Guide: SQL, Datasets and DataFrame](sql-migration-guide.html).
 
 ## Upgrading from PySpark 2.4 to 3.0
+- In Spark 3.0, PySpark requires a pandas version of 0.23.2 or higher to use pandas related functionality, such as `toPandas`, `createDataFrame` from pandas DataFrame, and so on.
 
-- Since Spark 3.0, PySpark requires a Pandas version of 0.23.2 or higher to use Pandas related functionality, such as `toPandas`, `createDataFrame` from Pandas DataFrame, etc.
-
-- Since Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
-
-- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. Since 3.0, the builder comes to not update the configurations. This is the same behavior as Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.
-
-- In PySpark, when Arrow optimization is enabled, if Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting Pandas.Series to Arrow array during serialization. Arrow will raise errors when detecting unsafe type conversion like overflow. Setting `spark.sql.execution.pandas.convertToArrowArraySafely` to true can enable it. The default setting is false. PySpark's behavior for Arrow versions is illustrated in the table below:
-  <table class="table">
-    <tr>
-      <th>
-        <b>PyArrow version</b>
-      </th>
-      <th>
-        <b>Integer Overflow</b>
-      </th>
-      <th>
-        <b>Floating Point Truncation</b>
-      </th>
-    </tr>
-    <tr>
-      <td>
-        version < 0.11.0
-      </td>
-      <td>
-        Raise error
-      </td>
-      <td>
-        Silently allows
-      </td>
-    </tr>
-    <tr>
-      <td>
-        version > 0.11.0, arrowSafeTypeConversion=false
-      </td>
-      <td>
-        Silent overflow
-      </td>
-      <td>
-        Silently allows
-      </td>
-    </tr>
-    <tr>
-      <td>
-        version > 0.11.0, arrowSafeTypeConversion=true
-      </td>
-      <td>
-        Raise error
-      </td>
-      <td>
-        Raise error
-      </td>
-    </tr>
-  </table>
-
-- Since Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified and resulted in `None` in case the value overflows. To restore this behavior, `verifySchema` can be set to `False` to disable the validation.
-
-- Since Spark 3.0, `Column.getItem` is fixed such that it does not call `Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, the indexing operator should be used.
-  For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]`.
-
-- As of Spark 3.0 `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields will match that as entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to "true" for both executors and driver - this environment variable must be consistent on all executors and driver; otherwise, it may cause failures or incorrect answers. For Python versions less than 3.6, the field names will be sorted alphabetically as the only option.
+- In Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
+
+- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. In 3.0, the builder comes to not update the configurations. This is the same behavior as Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.
+
+- In PySpark, when Arrow optimization is enabled, if Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting `pandas.Series` to an Arrow array during serialization. Arrow raises errors when detecting unsafe type conversions like overflow. You enable it by setting `spark.sql.execution.pandas.convertToArrowArraySafely` to `true`. The default setting is `false`. PySpark behavior for Arrow versions is illustrated in the following table:
+
+  | PyArrow version | Integer overflow | Floating point truncation |
+  | ---------------- | ---------------- | ------------------------- |
+  | 0.11.0 and below | Raise error | Silently allows |
+  | \> 0.11.0, arrowSafeTypeConversion=false | Silent overflow | Silently allows |
+  | \> 0.11.0, arrowSafeTypeConversion=true | Raise error | Raise error |
+
+- In Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified and resulted in `None` in case the value overflows. To restore this behavior, `verifySchema` can be set to `False` to disable the validation.
+
+- In Spark 3.0, `Column.getItem` is fixed such that it does not call `Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, the indexing operator should be used. For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]`.
+
+- As of Spark 3.0, `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields will match that as entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to `true` for both executors and driver - this environment variable must be consistent on all executors and driver; otherwise, it may cause failures or incorrect answers. For Python versions less than 3.6, the field names will be sorted alphabetically as the only option.
 
 ## Upgrading from PySpark 2.3 to 2.4
 
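For readers of the Arrow-related items in the diff above, here is a small illustrative sketch (not part of the patch) of the configurations they mention. The application name and sample data are invented for the example, and it assumes pandas >= 0.23.2 and PyArrow >= 0.12.1 are installed.

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("arrow-migration-demo")                        # placeholder name
    # Arrow-accelerated pandas conversions, as referenced in the guide.
    .config("spark.sql.execution.arrow.enabled", "true")
    # Raise an error instead of silently overflowing or truncating when a
    # pandas.Series is converted to an Arrow array (PyArrow > 0.11.0).
    .config("spark.sql.execution.pandas.convertToArrowArraySafely", "true")
    .getOrCreate()
)

pdf = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})  # sample data
df = spark.createDataFrame(pdf)      # pandas -> Spark DataFrame via Arrow
result = df.toPandas()               # Spark -> pandas, also via Arrow
```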

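Similarly, a short illustrative snippet (not from the patch) of the `Column.getItem` change described in the diff; the DataFrame and map literal are made up for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, create_map, lit

spark = SparkSession.builder.appName("getitem-demo").getOrCreate()  # placeholder name

df = spark.createDataFrame([(1,), (2,)], ["id"])                    # sample data
map_col = create_map(lit(1), lit("one"), lit(2), lit("two"))        # {1: "one", 2: "two"}

# Spark 2.4 style, which the guide above says should be replaced:
# df.select(map_col.getItem(col("id")))

# Spark 3.0 style: use the indexing operator when the key is a Column.
df.select(map_col[col("id")].alias("looked_up")).show()
```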