Upgrading from PySpark 2.4 to 3.0
In Spark 3.0, PySpark requires a pandas version of 0.23.2 or higher to use pandas-related functionality, such as createDataFrame from a pandas DataFrame, and so on.
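As a minimal sketch of the pandas interoperability this version check gates (the column names here are arbitrary):

```python
# Requires pandas >= 0.23.2 in Spark 3.0; older pandas versions cause the
# pandas-related code paths to raise an error about the minimum version.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pdf = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
sdf = spark.createDataFrame(pdf)  # pandas DataFrame -> Spark DataFrame
sdf.show()
```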
In Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow-related functionality, such as createDataFrame with “spark.sql.execution.arrow.enabled=true”, etc.
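A minimal sketch of the Arrow-backed path, assuming a local session and using the configuration key named above:

```python
# Requires PyArrow >= 0.12.1 in Spark 3.0 once Arrow execution is enabled.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .config("spark.sql.execution.arrow.enabled", "true") \
    .getOrCreate()

pdf = pd.DataFrame({"x": range(100)})
sdf = spark.createDataFrame(pdf)  # pandas -> Spark conversion goes through Arrow
back = sdf.toPandas()             # Spark -> pandas does too
```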
In PySpark, when creating a SparkSession with SparkSession.builder.getOrCreate(), if there is an existing SparkContext, the builder tried to update the SparkConf of the existing SparkContext with the configurations specified to the builder; but the SparkContext is shared by all SparkSessions, so those configurations should not be updated. In 3.0, the builder no longer updates them. This is the same behavior as the Java/Scala API in 2.3 and above. If you want to update them, you need to do so prior to creating the SparkSession.
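A minimal sketch of the 3.0 behavior, with spark.executor.memory standing in for any SparkContext-level setting:

```python
from pyspark.sql import SparkSession

# Configurations must be in place before the SparkContext is first created.
spark = SparkSession.builder \
    .config("spark.executor.memory", "2g") \
    .getOrCreate()

# In 3.0, a later builder no longer mutates the shared SparkContext's SparkConf:
spark2 = SparkSession.builder \
    .config("spark.executor.memory", "4g") \
    .getOrCreate()
# spark2 reuses the existing SparkContext; "spark.executor.memory" stays "2g".
```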
In PySpark, when Arrow optimization is enabled, if the Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting pandas.Series to an Arrow array during serialization. Arrow raises errors when detecting unsafe type conversions like overflow. You enable it by setting spark.sql.execution.pandas.convertToArrowArraySafely to true. The default setting is false. PySpark's behavior across Arrow versions is illustrated in the following table:
| PyArrow version | Integer overflow | Floating point truncation |
| --- | --- | --- |
| 0.11.0 and below | Raise error | Silently allows |
| > 0.11.0, arrowSafeTypeConversion=false | Silent overflow | Silently allows |
| > 0.11.0, arrowSafeTypeConversion=true | Raise error | Raise error |
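A minimal sketch of opting in, using a pandas UDF whose declared return type is narrower than its values (the UDF itself is illustrative):

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import IntegerType

spark = SparkSession.builder \
    .config("spark.sql.execution.pandas.convertToArrowArraySafely", "true") \
    .getOrCreate()

@pandas_udf(IntegerType())
def shifted(s: pd.Series) -> pd.Series:
    # The values exceed the declared 32-bit range; with the safe-conversion
    # flag on, Arrow raises during serialization instead of overflowing silently.
    return s + 2**31

df = spark.range(3).select(shifted("id"))
# df.show()  # raises a conversion error rather than returning wrapped values
```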
In Spark 3.0, createDataFrame(..., verifySchema=True) validates LongType as well in PySpark. Previously, LongType was not verified, and overflowing values resulted in None. To restore this behavior, verifySchema can be set to False to disable the validation.
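A minimal sketch of both sides of the change; 2**63 does not fit in a signed 64-bit long:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType

spark = SparkSession.builder.getOrCreate()
schema = StructType([StructField("v", LongType())])

# Spark 3.0 default: the validation catches the overflow and raises.
# spark.createDataFrame([(2**63,)], schema)

# Restoring the 2.4 behavior: no validation, so the value becomes None.
df = spark.createDataFrame([(2**63,)], schema, verifySchema=False)
df.show()  # the overflowed value shows as null
```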
As of Spark 3.0, Row field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields matches that in which they were entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable PYSPARK_ROW_FIELD_SORTING_ENABLED to true for both executors and the driver; this environment variable must be consistent on all executors and the driver, otherwise it may cause failures or incorrect answers. For Python versions lower than 3.6, the field names are sorted alphabetically as the only option.
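A minimal sketch of the ordering difference:

```python
from pyspark.sql import Row

r = Row(name="Alice", age=11)
# Spark 3.0 (Python 3.6+): Row(name='Alice', age=11) -- entry order kept
# Spark 2.4:               Row(age=11, name='Alice') -- sorted alphabetically

# To restore the 2.4 sorting, export PYSPARK_ROW_FIELD_SORTING_ENABLED=true
# consistently on the driver and every executor before starting PySpark.
```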
In Spark 3.0, the pyspark.ml.param.shared.Has* mixins no longer provide any set*(self, value) setter methods; use the respective self.set(self.*, value) instead. See SPARK-29093 for details.
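A minimal sketch of migrating a custom Params class; HasMaxIter is one of the affected shared mixins, and the setMaxIter defined here is our own illustrative replacement, not an API the mixin still provides:

```python
from pyspark.ml.param.shared import HasMaxIter

class MyParams(HasMaxIter):
    def setMaxIter(self, value):
        # In 2.4 the mixin supplied a setter like this; in 3.0 we define it
        # ourselves on top of the generic Params.set().
        return self.set(self.maxIter, value)

p = MyParams()
p.setMaxIter(10)
print(p.getMaxIter())  # 10
```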