Third-Party Hadoop Distributions

Spark can run against all versions of Cloudera’s Distribution Including Apache Hadoop (CDH) and the Hortonworks Data Platform (HDP). There are a few things to keep in mind when using Spark with these distributions:

Compile-time Hadoop Version

When compiling Spark, you’ll need to specify the Hadoop version by defining the hadoop.version property. For certain versions, you will need to specify additional profiles. For more detail, see the guide on building with Maven:

mvn -Dhadoop.version=1.0.4 -DskipTests clean package
mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package

The table below lists the corresponding hadoop.version code for each CDH/HDP release. Note that some Hadoop releases are binary compatible across client versions. This means the pre-built Spark distribution may “just work” without you needing to compile. That said, we recommend compiling with the exact Hadoop version you are running to avoid any compatibility errors.

CDH Releases

Release                   Version code
CDH 4.X.X (YARN mode)     2.0.0-cdh4.X.X
CDH 4.X.X                 2.0.0-mr1-cdh4.X.X
CDH 3u6                   0.20.2-cdh3u6
CDH 3u5                   0.20.2-cdh3u5
CDH 3u4                   0.20.2-cdh3u4
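As a sketch, the version codes above are substituted into the Maven build in place of the stock Hadoop version. The release number below (cdh4.2.0) is illustrative, following the 2.0.0-cdh4.X.X pattern from the table:

```shell
# Illustrative: build Spark against a CDH 4 release in YARN mode,
# using the corresponding version code from the table above
mvn -Dhadoop.version=2.0.0-cdh4.2.0 -DskipTests clean package
```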

HDP Releases

Release                   Version code

In SBT, the equivalent can be achieved by setting the hadoop.version property:

sbt/sbt -Dhadoop.version=1.0.4 assembly

Linking Applications to the Hadoop Version

In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that version of hadoop-client to any Spark applications you run, so they can also talk to the HDFS version on the cluster. If you are using CDH, you also need to add the Cloudera Maven repository. This looks as follows in SBT:

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "<version>"

// If using CDH, also add Cloudera repo
resolvers += "Cloudera Repository" at ""

Or in Maven:


<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>[version]</version>
</dependency>

<!-- If using CDH, also add Cloudera repo -->
<repository>
  <id>Cloudera repository</id>
  <url></url>
</repository>

Where to Run Spark

As described in the Hardware Provisioning guide, Spark can run in a variety of deployment modes:

- In standalone mode, on a dedicated set of machines
- Alongside an existing Hadoop installation, sharing the same nodes
- Within a cluster resource manager such as Hadoop YARN or Apache Mesos

These options are identical for those using CDH and HDP.

Inheriting Cluster Configuration

If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that should be included on Spark’s classpath:

- hdfs-site.xml, which provides default behaviors for the HDFS client
- core-site.xml, which sets the default filesystem name

The location of these configuration files varies across CDH and HDP versions, but a common location is inside of /etc/hadoop/conf. Some tools, such as Cloudera Manager, create configurations on the fly, but offer mechanisms to download copies of them.

To make these files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/ to a location containing the configuration files.
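A minimal sketch of such a setting, assuming the configuration files live in the common /etc/hadoop/conf location (adjust the path for your distribution):

```shell
# Point Spark at the directory holding core-site.xml and hdfs-site.xml
# (the /etc/hadoop/conf path is an assumption; it varies by distribution)
export HADOOP_CONF_DIR=/etc/hadoop/conf
```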