Spark Overview

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Scala, Java, and Python that make parallel jobs easy to write, and an optimized engine that supports general computation graphs. It also supports a rich set of higher-level tools including Shark (Hive on Spark), MLlib for machine learning, GraphX for graph processing, and Spark Streaming.


Downloading

Get Spark by visiting the downloads page of the Apache Spark site. This documentation is for Spark version 0.9.0-incubating.

Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.


Building

Spark uses the Simple Build Tool (sbt), which is bundled with it. To compile the code, go into the top-level Spark directory and run

sbt/sbt assembly

For its Scala API, Spark 0.9.0-incubating depends on Scala 2.10. If you write applications in Scala, you will need to use a compatible Scala version (e.g. 2.10.x); newer major versions may not work. You can get the right version of Scala from scala-lang.org.

Running the Examples and Shell

Spark comes with several sample programs in the examples directory. To run one of the samples, use ./bin/run-example <class> <params> in the top-level Spark directory (the bin/run-example script sets up the appropriate paths and launches that program). For example, try ./bin/run-example org.apache.spark.examples.SparkPi local. Each example prints usage help when run with no parameters.

Note that all of the sample programs take a <master> parameter specifying the cluster URL to connect to. This can be a URL for a distributed cluster, or local to run locally with one thread, or local[N] to run locally with N threads. You should start by using local for testing.
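To make the three master URL forms concrete, here is a small illustrative sketch (in Python, not Spark's actual master-URL parser) of how each form maps to a thread count:

```python
import re

def local_thread_count(master):
    """Map a master URL to the number of local worker threads it implies.

    Illustrative only -- this mirrors the URL forms described above,
    not Spark's real parsing code.
    """
    if master == "local":
        return 1                      # one thread, good for testing
    m = re.fullmatch(r"local\[(\d+)\]", master)
    if m:
        return int(m.group(1))        # local[N]: N threads
    return None                       # anything else: a distributed cluster URL

print(local_thread_count("local"))              # 1
print(local_thread_count("local[4]"))           # 4
print(local_thread_count("spark://host:7077"))  # None
```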

Finally, you can run Spark interactively through modified versions of the Scala shell (./bin/spark-shell) or Python interpreter (./bin/pyspark). These are a great way to learn the framework.
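For example, assuming you have built the assembly as described above, you can start either shell against a local master (in this release the shells read the MASTER environment variable); this is a sketch of a typical session, not required syntax:

```shell
# Start the Scala shell with 2 local worker threads; adjust MASTER as needed.
MASTER=local[2] ./bin/spark-shell

# Or start the Python shell the same way:
MASTER=local[2] ./bin/pyspark
```

If MASTER is unset, the shells default to running locally.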

Launching on a Cluster

The Spark cluster mode overview explains the key concepts in running on a cluster. Spark can run both by itself, or over several existing cluster managers. It currently provides several options for deployment, including its standalone deploy mode, Amazon EC2, Apache Mesos, and Hadoop YARN.

A Note About Hadoop Versions

Spark uses the hadoop-client library to talk to HDFS and other Hadoop-supported storage systems. Because the HDFS protocol has changed across Hadoop versions, you must build Spark against the same version that your cluster runs. By default, Spark links to Hadoop 1.0.4. You can change this by setting the SPARK_HADOOP_VERSION variable when compiling:

SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly

In addition, if you wish to run Spark on YARN, set SPARK_YARN to true:

SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

Note that on Windows, you need to set the environment variables on separate lines, e.g., set SPARK_HADOOP_VERSION=1.2.1.
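On Windows that means issuing each set command on its own line before invoking the build, along these lines (a hypothetical cmd.exe session; version numbers are examples):

```shell
rem Set each variable on its own line before invoking the build
set SPARK_HADOOP_VERSION=2.2.0
set SPARK_YARN=true
```

Unlike on UNIX-like shells, cmd.exe does not support prefixing a command with inline variable assignments, which is why the separate lines are needed.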

For this version of Spark (0.9.0-incubating), Hadoop 2.2.x (or newer) users will have to build Spark and publish it locally. See Launching Spark on YARN. This is needed because Hadoop 2.2 introduced non-backwards-compatible API changes.

Where to Go from Here

Programming guides:

API Docs:

Deployment guides:

Other documents:

External resources:


To get help using Spark or keep up with Spark development, sign up for the user mailing list.

If you’re in the San Francisco Bay Area, there’s a regular Spark meetup every few weeks. Come by to meet the developers and other users.

Finally, if you’d like to contribute code to Spark, read how to contribute.