Python Programming Guide
The Spark Python API (PySpark) exposes the Spark programming model to Python. To learn the basics of Spark, we recommend reading through the Scala programming guide first; it should be easy to follow even if you don’t know Scala. This guide will show how to use the Spark features described there in Python.
Key Differences in the Python API
There are a few key differences between the Python and Scala APIs:
- Python is dynamically typed, so RDDs can hold objects of multiple types (see the sketch after this list).
- PySpark does not yet support a few API calls, such as sort, and non-text input files, though these will be added in future releases.
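As an illustration of the first point, a single RDD can mix element types. The following is a minimal sketch, with made-up values, assuming an existing SparkContext named sc as in the examples below:

mixed = sc.parallelize([1, "two", 3.0, (4, "four")])  # elements of several types
strings = mixed.filter(lambda x: isinstance(x, str))  # keep only the strings
strings.collect()  # returns ['two']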
In PySpark, RDDs support the same methods as their Scala counterparts but take Python functions and return Python collection types.
Short functions can be passed to RDD methods using Python's lambda syntax:

logData = sc.textFile(logFile).cache()
errors = logData.filter(lambda line: "ERROR" in line)
You can also pass functions that are defined with the def keyword; this is useful for longer functions that can’t be expressed using lambda:

def is_error(line):
    return "ERROR" in line
errors = logData.filter(is_error)
Functions can access objects in enclosing scopes, although modifications to those objects within RDD methods will not be propagated back:
error_keywords = ["Exception", "Error"]
def is_error(line):
    return any(keyword in line for keyword in error_keywords)
errors = logData.filter(is_error)
PySpark will automatically ship these functions to workers, along with any objects that they reference. Instances of classes will be serialized and shipped to workers by PySpark, but classes themselves cannot be automatically distributed to workers. The Standalone Use section describes how to ship code dependencies to workers.
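For example, a bound method of a class instance can be passed to an RDD method. In the sketch below (SearchFilter is a made-up class name), the instance f is serialized and shipped to the workers, but the SearchFilter class definition itself must already be importable there, e.g. via the pyFiles mechanism described under Standalone Use:

class SearchFilter(object):
    def __init__(self, term):
        self.term = term  # captured state; serialized along with the instance

    def matches(self, line):
        return self.term in line

f = SearchFilter("ERROR")
errors = logData.filter(f.matches)  # f is pickled and sent to the workers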
In addition, PySpark fully supports interactive use; simply run ./pyspark to launch an interactive shell.
Installing and Configuring PySpark
PySpark requires Python 2.6 or higher. PySpark applications are executed using a standard CPython interpreter in order to support Python modules that use C extensions. We have not tested PySpark with Python 3 or with alternative Python interpreters, such as PyPy or Jython.
By default, PySpark requires python to be available on the system PATH and uses it to run programs; an alternate Python executable may be specified by setting the PYSPARK_PYTHON environment variable in conf/spark-env.sh (or .cmd on Windows).
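For example, you could also point PySpark at a specific interpreter for a single run by setting the variable on the command line (the interpreter path here is illustrative):

$ PYSPARK_PYTHON=/usr/bin/python2.7 ./pyspark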
All of PySpark’s library dependencies, including Py4J, are bundled with PySpark and automatically imported.
Standalone PySpark applications should be run using the pyspark script, which automatically configures the Java and Python environment using the settings in conf/spark-env.sh (or .cmd on Windows). The script automatically adds the pyspark package to the PYTHONPATH.
Interactive Use
The pyspark script launches a Python interpreter that is configured to run PySpark applications. To use pyspark interactively, first build Spark, then launch it directly from the command line without any options:
$ sbt/sbt assembly
$ ./pyspark
The Python shell can be used to explore data interactively and is a simple way to learn the API:
>>> words = sc.textFile("/usr/share/dict/words")
>>> words.filter(lambda w: w.startswith("spar")).take(5)
[u'spar', u'sparable', u'sparada', u'sparadrap', u'sparagrass']
>>> help(pyspark)  # Show all pyspark functions
By default, the pyspark shell creates a SparkContext that runs applications locally on a single core. To connect to a non-local cluster, or to use multiple cores, set the MASTER environment variable. For example, to use the pyspark shell with a standalone Spark cluster:
$ MASTER=spark://IP:PORT ./pyspark
Or, to use four cores on the local machine:

$ MASTER=local[4] ./pyspark
It is also possible to launch PySpark in IPython, the enhanced Python interpreter. To do this, set the IPYTHON environment variable to 1 when running ./pyspark:

$ IPYTHON=1 ./pyspark
Alternatively, you can customize the ipython command by setting IPYTHON_OPTS. For example, to launch the IPython Notebook with PyLab graphing support:

$ IPYTHON_OPTS="notebook --pylab inline" ./pyspark
IPython also works on a cluster or on multiple cores if you set the MASTER environment variable.
Standalone Use
PySpark can also be used from standalone Python scripts by creating a SparkContext in your script and running the script using pyspark. The Quick Start guide includes a complete example of a standalone Python application.
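The following is a minimal sketch of such a script; the application name and input path are made up, and the script would be launched by passing its file to ./pyspark:

from pyspark import SparkContext

if __name__ == "__main__":
    # Run locally on a single core; "Error Count" is just the application name.
    sc = SparkContext("local", "Error Count")
    logData = sc.textFile("/var/log/syslog")  # illustrative input path
    numErrors = logData.filter(lambda line: "ERROR" in line).count()
    print "Lines containing ERROR: %d" % numErrors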
Code dependencies can be deployed by listing them in the pyFiles option in the SparkContext constructor:
from pyspark import SparkContext
sc = SparkContext("local", "App Name", pyFiles=['MyFile.py', 'lib.zip', 'app.egg'])
Files listed here will be added to the PYTHONPATH and shipped to remote worker machines. Code dependencies can be added to an existing SparkContext using its addPyFile() method.
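For example, to ship an additional dependency after the context has been created (the file name here is illustrative):

sc.addPyFile("lib.zip")  # shipped to workers and added to their PYTHONPATH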
Where to Go from Here
PySpark also includes several sample programs in its examples folder. You can run them by passing the script files to pyspark. Each program prints usage help when run without arguments.
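For example (the exact script path may differ between releases, so treat it as illustrative):

$ ./pyspark python/examples/wordcount.py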