docs/core-migration-guide.md (3 additions & 3 deletions)
@@ -26,7 +26,7 @@ license: |
 
 - The `org.apache.spark.ExecutorPlugin` interface and related configuration has been replaced with
   `org.apache.spark.plugin.SparkPlugin`, which adds new functionality. Plugins using the old
-  interface need to be modified to extend the new interfaces. Check the
+  interface must be modified to extend the new interfaces. Check the
   [Monitoring](monitoring.html) guide for more details.
 
 - Deprecated method `TaskContext.isRunningLocally` has been removed. Local execution was removed and it always has returned `false`.
@@ -35,6 +35,6 @@ license: |
 
 - Deprecated method `AccumulableInfo.apply` have been removed because creating `AccumulableInfo` is disallowed.
 
-- Event log file will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously Spark writes event log file as default charset of driver JVM process, so Spark History Server of Spark 2.x is needed to read the old event log files in case of incompatible encoding.
+- Event log file will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously Spark wrote the event log file as default charset of driver JVM process, so Spark History Server of Spark 2.x is needed to read the old event log files in case of incompatible encoding.
 
-- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. Old external shuffle services can still be used by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
+- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. You can still use old external shuffle services by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
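For reference, the `spark.shuffle.useOldFetchProtocol` fallback mentioned above is set like any other Spark configuration. A minimal PySpark sketch, not part of the patch itself (the application name is a placeholder; the same flag can also be passed to `spark-submit` via `--conf`):

```python
from pyspark.sql import SparkSession

# Keep the pre-3.0 shuffle fetch protocol so a Spark 3.0 application can
# still talk to external shuffle services that have not been upgraded yet.
spark = (
    SparkSession.builder
    .appName("legacy-shuffle-protocol-demo")  # placeholder name
    .config("spark.shuffle.useOldFetchProtocol", "true")
    .getOrCreate()
)
```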
docs/pyspark-migration-guide.md (18 additions & 60 deletions)
@@ -27,67 +27,25 @@ Many items of SQL migration can be applied when migrating PySpark to higher vers
 Please refer [Migration Guide: SQL, Datasets and DataFrame](sql-migration-guide.html).
 
 ## Upgrading from PySpark 2.4 to 3.0
+- In Spark 3.0, PySpark requires a pandas version of 0.23.2 or higher to use pandas related functionality, such as `toPandas`, `createDataFrame` from pandas DataFrame, and so on.
 
-- Since Spark 3.0, PySpark requires a Pandas version of 0.23.2 or higher to use Pandas related functionality, such as `toPandas`, `createDataFrame` from Pandas DataFrame, etc.
-
-- Since Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
-
-- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. Since 3.0, the builder comes to not update the configurations. This is the same behavior as Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.
-
-- In PySpark, when Arrow optimization is enabled, if Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting Pandas.Series to Arrow array during serialization. Arrow will raise errors when detecting unsafe type conversion like overflow. Setting `spark.sql.execution.pandas.convertToArrowArraySafely` to true can enable it. The default setting is false. PySpark's behavior for Arrow versions is illustrated in the table below:
-  <table class="table">
-    <tr>
-      <th>
-        <b>PyArrow version</b>
-      </th>
-      <th>
-        <b>Integer Overflow</b>
-      </th>
-      <th>
-        <b>Floating Point Truncation</b>
-      </th>
-    </tr>
-    <tr>
-      <td>
-        version < 0.11.0
-      </td>
-      <td>
-        Raise error
-      </td>
-      <td>
-        Silently allows
-      </td>
-    </tr>
-    <tr>
-      <td>
-        version > 0.11.0, arrowSafeTypeConversion=false
-      </td>
-      <td>
-        Silent overflow
-      </td>
-      <td>
-        Silently allows
-      </td>
-    </tr>
-    <tr>
-      <td>
-        version > 0.11.0, arrowSafeTypeConversion=true
-      </td>
-      <td>
-        Raise error
-      </td>
-      <td>
-        Raise error
-      </td>
-    </tr>
-  </table>
-
-- Since Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified and resulted in `None` in case the value overflows. To restore this behavior, `verifySchema` can be set to `False` to disable the validation.
-
-- Since Spark 3.0, `Column.getItem` is fixed such that it does not call `Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, the indexing operator should be used.
-  For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]`.
-
-- As of Spark 3.0 `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields will match that as entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to "true" for both executors and driver - this environment variable must be consistent on all executors and driver; otherwise, it may cause failures or incorrect answers. For Python versions less than 3.6, the field names will be sorted alphabetically as the only option.
+- In Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
+
+- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. In 3.0, the builder comes to not update the configurations. This is the same behavior as Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.
+
+- In PySpark, when Arrow optimization is enabled, if Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting `pandas.Series` to an Arrow array during serialization. Arrow raises errors when detecting unsafe type conversions like overflow. You enable it by setting `spark.sql.execution.pandas.convertToArrowArraySafely` to `true`. The default setting is `false`. PySpark behavior for Arrow versions is illustrated in the following table:
+
+  | PyArrow version | Integer overflow | Floating point truncation |
+  | --------------- | ---------------- | ------------------------- |
+  | version < 0.11.0 | Raise error | Silently allows |
+  | version > 0.11.0, arrowSafeTypeConversion=false | Silent overflow | Silently allows |
+  | version > 0.11.0, arrowSafeTypeConversion=true | Raise error | Raise error |
+
+- In Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified and resulted in `None` in case the value overflows. To restore this behavior, `verifySchema` can be set to `False` to disable the validation.
+
+- In Spark 3.0, `Column.getItem` is fixed such that it does not call `Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, the indexing operator should be used. For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]`.
+
+- As of Spark 3.0, `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields will match that as entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to `true` for both executors and driver - this environment variable must be consistent on all executors and driver; otherwise, it may cause failures or incorrect answers. For Python versions less than 3.6, the field names will be sorted alphabetically as the only option.
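To make the pandas/PyArrow requirements and the Arrow safe-conversion flag above concrete, here is an illustrative sketch rather than part of the patch; the application name and data are made up, and it assumes pandas >= 0.23.2 and PyArrow >= 0.12.1 are installed:

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("arrow-conversion-demo")  # placeholder name
    .config("spark.sql.execution.arrow.enabled", "true")
    # Ask Arrow to raise on unsafe conversions (overflow/truncation) instead of
    # silently producing wrong values; the default is false.
    .config("spark.sql.execution.pandas.convertToArrowArraySafely", "true")
    .getOrCreate()
)

pdf = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
sdf = spark.createDataFrame(pdf)   # pandas -> Spark, Arrow-accelerated
roundtrip = sdf.toPandas()         # Spark -> pandas, Arrow-accelerated
```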
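For the `SparkSession.builder.getOrCreate()` change above, the practical consequence is that core settings must be supplied before the first `SparkContext` exists. A hedged sketch (the memory values are arbitrary examples):

```python
from pyspark.sql import SparkSession

# In Spark 3.0 the builder no longer updates the SparkConf of an existing
# SparkContext, so supply core settings on the very first getOrCreate() call.
spark = (
    SparkSession.builder
    .appName("builder-config-demo")           # placeholder name
    .config("spark.executor.memory", "2g")    # arbitrary example value
    .getOrCreate()
)

# Once a SparkContext is running, a later builder config like the one below
# does not change that context (same behavior as the Java/Scala API since 2.3).
same_session = (
    SparkSession.builder
    .config("spark.executor.memory", "4g")    # no effect on the running SparkContext
    .getOrCreate()
)
```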
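The `verifySchema` note above can be illustrated with a value that overflows `LongType`; a sketch under the behavior described in that note (the overflowing literal is just an example):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StructField, StructType

spark = SparkSession.builder.getOrCreate()
schema = StructType([StructField("x", LongType())])

# Spark 3.0 default (verifySchema=True): the overflowing value is rejected.
# spark.createDataFrame([(2 ** 70,)], schema=schema)   # raises a verification error

# Spark 2.4-style behavior: skip verification; per the note above, the
# overflowed value ends up as None/null instead of failing.
df = spark.createDataFrame([(2 ** 70,)], schema=schema, verifySchema=False)
```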
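The `Column.getItem` item above translates to code like the following sketch (the column names and map literal are made up):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, create_map, lit

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1,), (2,)], ["id"])
mapped = df.select(
    col("id"),
    create_map(lit(1), lit("a"), lit(2), lit("b")).alias("map_col"),
)

# Spark 2.4 style, no longer supported in 3.0 when the argument is a Column:
# mapped.select(mapped.map_col.getItem(col("id")))

# Spark 3.0 style: use the indexing operator instead.
mapped.select(mapped.map_col[col("id")]).show()
```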
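Finally, the `Row` field-ordering change above can be observed directly. A sketch for Python 3.6+; the environment variable is only mentioned in a comment because it must be exported before PySpark starts on the driver and every executor:

```python
from pyspark.sql import Row

# Spark 3.0 on Python 3.6+: fields keep the order given here ("b" before "a").
# Spark 2.4 would have sorted them alphabetically ("a" before "b").
row = Row(b=1, a=2)
print(row)

# To restore the sorted 2.4 behavior, export PYSPARK_ROW_FIELD_SORTING_ENABLED=true
# consistently for the driver and all executors before launching the application.
```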