Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, and Python, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for processing live data streams.
Get Spark from the downloads page of the project website. This documentation is for Spark version 1.1.1. The downloads page contains Spark packages for many popular HDFS versions. If you'd like to build Spark from source, visit Building Spark with Maven.
Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). It is easy to run locally on one machine: all you need is to have java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.
Spark runs on Java 6+ and Python 2.6+. For the Scala API, Spark 1.1.1 uses Scala 2.10. You will need to use a compatible Scala version (2.10.x).
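If you are building a standalone Scala application against this release, a build definition along the following lines works; this is only a sketch that assumes you manage the build with sbt, and the project name and version are placeholders chosen for illustration:

```scala
// build.sbt (an sbt build definition, written in Scala syntax) -- an illustrative sketch.
// The project name and version are placeholders; the Spark coordinates are the
// standard published ones for this release.
name := "simple-spark-app"

version := "0.1.0"

// Spark 1.1.1 is built against Scala 2.10, so the application must use a 2.10.x release.
scalaVersion := "2.10.4"

// Published as spark-core_2.10; the %% operator appends the Scala binary version.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.1"
```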
Running the Examples and Shell
Spark comes with several sample programs. Scala, Java, and Python examples are in the examples/src/main directory. To run one of the Java or Scala sample programs, use bin/run-example <class> [params] in the top-level Spark directory. (Behind the scenes, this invokes the more general spark-submit script for launching applications.) For example,
./bin/run-example SparkPi 10
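To give a sense of what such an example contains, the sketch below is a simplified, self-contained Pi estimator in the spirit of the bundled SparkPi program; it is not the shipped example, and the object name and sample count are chosen here purely for illustration:

```scala
// A simplified Monte Carlo Pi estimator in the spirit of the bundled SparkPi example.
// This is an illustrative sketch, not the example shipped with Spark.
import org.apache.spark.{SparkConf, SparkContext}

object SimplePi {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Pi")
    val sc = new SparkContext(conf)

    val samples = 100000              // number of random points to throw
    val inside = sc.parallelize(1 to samples).map { _ =>
      val x = math.random * 2 - 1     // random point in the square [-1, 1] x [-1, 1]
      val y = math.random * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)

    println(s"Pi is roughly ${4.0 * inside / samples}")
    sc.stop()
  }
}
```

Packaged into a jar, a program like this would be launched with spark-submit rather than run-example.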
You can also run Spark interactively through a modified version of the Scala shell. This is a great way to learn the framework.
./bin/spark-shell --master local
The --master option specifies the master URL for a distributed cluster, or local to run locally with one thread, or local[N] to run locally with N threads. You should start by using local for testing. For a full list of options, run the Spark shell with the --help option.
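Inside the shell, a SparkContext is already available as the variable sc, so you can experiment immediately; for example (the file name below is only a placeholder):

```scala
// Typed at the spark-shell prompt; the shell provides a ready-made SparkContext named sc.
// "README.md" is just a placeholder path; any local text file will do.
val lines = sc.textFile("README.md")
lines.count()                              // total number of lines in the file
lines.filter(_.contains("Spark")).count()  // lines that mention "Spark"
```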
Spark also provides a Python API. To run Spark interactively in a Python interpreter, use
./bin/pyspark --master local
Example applications are also provided in Python. For example,
./bin/spark-submit examples/src/main/python/pi.py 10
Launching on a Cluster
The Spark cluster mode overview explains the key concepts of running on a cluster. Spark can run either by itself or on top of several existing cluster managers. It currently provides several options for deployment; a brief sketch of pointing an application at a cluster master follows the list:
- Amazon EC2: our EC2 scripts let you launch a cluster in about 5 minutes
- Standalone Deploy Mode: simplest way to deploy Spark on a private cluster
- Apache Mesos
- Hadoop YARN
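Whichever manager you choose, an application reaches it through a master URL, supplied either on the spark-submit command line or set in code. The sketch below shows the in-code variant for a standalone cluster; the host and port are placeholders, and in practice the URL is usually passed at submit time rather than hard-coded:

```scala
// Pointing an application at a standalone cluster master from code -- an illustrative sketch.
// "spark://host:7077" is a placeholder URL for the cluster's master.
import org.apache.spark.{SparkConf, SparkContext}

object ClusterApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Cluster App")
      .setMaster("spark://host:7077")
    val sc = new SparkContext(conf)

    // A trivial job to confirm the executors are reachable.
    println(sc.parallelize(1 to 1000).count())

    sc.stop()
  }
}
```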
Where to Go from Here
- Quick Start: a quick introduction to the Spark API; start here!
- Spark Programming Guide: detailed overview of Spark in all supported languages (Scala, Java, Python)
- Modules built on Spark:
  - Spark Streaming: processing live data streams
  - Spark SQL: SQL and structured data processing
  - MLlib: machine learning
  - GraphX: graph processing
- Cluster Overview: overview of concepts and components when running on a cluster
- Submitting Applications: packaging and deploying applications
- Deployment modes:
  - Amazon EC2: scripts for launching a cluster on EC2
  - Standalone Deploy Mode: deploying Spark on a private cluster
  - Apache Mesos: running Spark on top of Mesos
  - Hadoop YARN: running Spark on top of YARN
- Configuration: customize Spark via its configuration system
- Monitoring: track the behavior of your applications
- Tuning Guide: best practices to optimize performance and memory use
- Job Scheduling: scheduling resources across and within Spark applications
- Security: Spark security support
- Hardware Provisioning: recommendations for cluster hardware
- 3rd Party Hadoop Distributions: using common Hadoop distributions
- Integration with other storage systems:
- Building Spark with Maven: build Spark using the Maven system
- Contributing to Spark
- Spark Homepage
- Mailing Lists: ask questions about Spark here
- AMP Camps: a series of training camps at UC Berkeley that featured talks and exercises about Spark, Spark Streaming, Mesos, and more. Videos, slides and exercises are available online for free.
- Code Examples: more are also available in the examples subfolder of Spark (Scala, Java, Python)
To get help using Spark or keep up with Spark development, sign up for the user mailing list.
If you’re in the San Francisco Bay Area, there’s a regular Spark meetup every few weeks. Come by to meet the developers and other users.
Finally, if you’d like to contribute code to Spark, read how to contribute.