# Migration Guide: Spark Core

## Upgrading from Core 3.0 to 3.1
- In Spark 3.0 and below, `SparkContext` can be created in executors. Since Spark 3.1, an exception will be thrown when creating `SparkContext` in executors. You can allow it by setting the configuration `spark.executor.allowSparkContext` when creating `SparkContext` in executors.
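  A minimal sketch of restoring the old behavior; only the config key comes from this note, the app name is illustrative:

  ```scala
  import org.apache.spark.SparkConf

  // Illustrative only: re-enables creating a SparkContext inside executors,
  // as was allowed before Spark 3.1.
  val conf = new SparkConf()
    .setAppName("allow-executor-context") // hypothetical app name
    .set("spark.executor.allowSparkContext", "true")
  ```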
- In Spark 3.0 and below, Spark propagated the Hadoop classpath from `mapreduce.application.classpath` into the Spark application submitted to YARN when the Spark distribution is built with Hadoop. Since Spark 3.1, the classpath is no longer propagated when the Spark distribution is built with Hadoop, in order to prevent failures caused by different transitive dependencies picked up from the Hadoop cluster, such as Guava and Jackson. To restore the behavior before Spark 3.1, you can set `spark.yarn.populateHadoopClasspath` to `true`.
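  For example, a minimal sketch (only the config key comes from this note):

  ```scala
  import org.apache.spark.SparkConf

  // Illustrative only: restores the pre-3.1 Hadoop classpath propagation
  // for a with-Hadoop Spark distribution submitted to YARN.
  val conf = new SparkConf()
    .set("spark.yarn.populateHadoopClasspath", "true")
  ```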
## Upgrading from Core 2.4 to 3.0
- The `org.apache.spark.ExecutorPlugin` interface and its related configuration have been replaced with `org.apache.spark.api.plugin.SparkPlugin`, which adds new functionality. Plugins using the old interface must be modified to extend the new interfaces. Check the Monitoring guide for more details.
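  A minimal sketch of the new interface, with a hypothetical plugin class; it wires up an executor-side component the way the old interface did:

  ```scala
  import java.util.{Map => JMap}

  import org.apache.spark.api.plugin.{DriverPlugin, ExecutorPlugin, PluginContext, SparkPlugin}

  // Hypothetical plugin: SparkPlugin is the new entry point and exposes
  // optional driver-side and executor-side components.
  class MyPlugin extends SparkPlugin {
    // No driver-side component in this sketch.
    override def driverPlugin(): DriverPlugin = null

    override def executorPlugin(): ExecutorPlugin = new ExecutorPlugin {
      override def init(ctx: PluginContext, extraConf: JMap[String, String]): Unit = {
        // Initialization the old interface's init() used to perform.
      }
      override def shutdown(): Unit = {
        // Executor-side cleanup.
      }
    }
  }
  ```

  Such a plugin is enabled through the `spark.plugins` configuration (a comma-separated list of class names) rather than the old `spark.executor.plugins` setting.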
- The deprecated method `TaskContext.isRunningLocally` has been removed. Local execution was removed, and it always returned `false`.
- The deprecated methods `shuffleBytesWritten`, `shuffleWriteTime` and `shuffleRecordsWritten` in `ShuffleWriteMetrics` have been removed. Instead, use `bytesWritten`, `writeTime` and `recordsWritten` respectively.
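  For illustration, a hypothetical `SparkListener` reading the renamed accessors:

  ```scala
  import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

  // Hypothetical listener; register it with sc.addSparkListener(...).
  class ShuffleWriteLogger extends SparkListener {
    override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
      val m = taskEnd.taskMetrics.shuffleWriteMetrics
      // Removed names: shuffleBytesWritten / shuffleWriteTime / shuffleRecordsWritten
      println(s"bytes=${m.bytesWritten} writeTimeNs=${m.writeTime} records=${m.recordsWritten}")
    }
  }
  ```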
- The deprecated method `AccumulableInfo.apply` has been removed because creating an `AccumulableInfo` is disallowed.
- Deprecated accumulator v1 APIs have been removed; use the v2 APIs instead.
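  A minimal sketch of the v2 replacement, using the built-in `LongAccumulator` (the session setup is illustrative):

  ```scala
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("acc-v2-sketch").getOrCreate()
  val sc = spark.sparkContext

  // v1 (removed): val acc = sc.accumulator(0L)
  val acc = sc.longAccumulator("records") // v2 built-in LongAccumulator
  sc.parallelize(1 to 100).foreach(_ => acc.add(1L))
  println(acc.value) // 100
  ```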
- Event log files will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously, Spark wrote the event log file in the default charset of the driver JVM process, so a Spark 2.x Spark History Server is needed to read old event log files in case of incompatible encoding.
- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. You can still use old external shuffle services by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
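  For illustration, falling back to the old protocol while an external shuffle service is still un-upgraded (only the config key comes from this note):

  ```scala
  import org.apache.spark.SparkConf

  // Illustrative only: keep using a pre-3.0 external shuffle service.
  val conf = new SparkConf()
    .set("spark.shuffle.useOldFetchProtocol", "true")
  ```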
- `SPARK_WORKER_INSTANCES` is deprecated in Standalone mode. It's recommended to launch multiple executors in one worker and launch one worker per node, instead of launching multiple workers per node with one executor per worker.
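  A sketch of the recommended layout; the sizes are illustrative, assuming a single 16-core worker per node that the application carves into four 4-core executors:

  ```scala
  import org.apache.spark.SparkConf

  // Illustrative sizing: one standalone worker per node, multiple executors
  // per worker, instead of SPARK_WORKER_INSTANCES > 1.
  val conf = new SparkConf()
    .set("spark.executor.cores", "4")   // cores per executor
    .set("spark.executor.memory", "8g") // memory per executor
    .set("spark.cores.max", "16")       // app-wide cap: up to 4 executors
  ```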