Package pyspark :: Module context :: Class SparkContext

Class SparkContext


object --+
         |
        SparkContext

Main entry point for Spark functionality. A SparkContext represents the connection to a Spark cluster, and can be used to create RDDs and broadcast variables on that cluster.

Instance Methods
 
__init__(self, master=None, appName=None, sparkHome=None, pyFiles=None, environment=None, batchSize=1024, serializer=PickleSerializer(), conf=None, gateway=None)
Create a new SparkContext.
 
version(self)
The version of Spark on which this application is running.
 
defaultParallelism(self)
Default level of parallelism to use when not given by user (e.g. for reduce tasks).
 
defaultMinPartitions(self)
Default minimum number of partitions for Hadoop RDDs when not given by the user.
 
stop(self)
Shut down the SparkContext.
 
parallelize(self, c, numSlices=None)
Distribute a local Python collection to form an RDD.
 
pickleFile(self, name, minPartitions=None)
Load an RDD previously saved using RDD.saveAsPickleFile method.
 
textFile(self, name, minPartitions=None)
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
 
wholeTextFiles(self, path, minPartitions=None)
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
 
sequenceFile(self, path, keyClass=None, valueClass=None, keyConverter=None, valueConverter=None, minSplits=None, batchSize=None)
Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
 
newAPIHadoopFile(self, path, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)
Read a 'new API' Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
 
newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)
Read a 'new API' Hadoop InputFormat with arbitrary key and value class, from an arbitrary Hadoop configuration, which is passed in as a Python dict.
 
hadoopFile(self, path, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)
Read an 'old' Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
 
hadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)
Read an 'old' Hadoop InputFormat with arbitrary key and value class, from an arbitrary Hadoop configuration, which is passed in as a Python dict.
 
union(self, rdds)
Build the union of a list of RDDs.
 
broadcast(self, value)
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
 
accumulator(self, value, accum_param=None)
Create an Accumulator with the given initial value, using a given AccumulatorParam helper object to define how values of the data type are added, if one is provided.
 
addFile(self, path)
Add a file to be downloaded with this Spark job on every node.
 
clearFiles(self)
Clear the job's list of files added by addFile or addPyFile so that they do not get downloaded to any new nodes.
 
addPyFile(self, path)
Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future.
 
setCheckpointDir(self, dirName)
Set the directory under which RDDs are going to be checkpointed.
 
setJobGroup(self, groupId, description, interruptOnCancel=False)
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
 
setLocalProperty(self, key, value)
Set a local property that affects jobs submitted from this thread, such as the Spark fair scheduler pool.
 
getLocalProperty(self, key)
Get a local property set in this thread, or None if it is missing.
 
sparkUser(self)
Get SPARK_USER for the user who is running SparkContext.
 
cancelJobGroup(self, groupId)
Cancel active jobs for the specified group.
 
cancelAllJobs(self)
Cancel all jobs that have been scheduled or are running.
 
runJob(self, rdd, partitionFunc, partitions=None, allowLocal=False)
Executes the given partitionFunc on the specified set of partitions, returning the result as an array of elements.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Class Methods
 
setSystemProperty(cls, key, value)
Set a Java system property, such as spark.executor.memory.
Properties

Inherited from object: __class__

Method Details

__init__(self, master=None, appName=None, sparkHome=None, pyFiles=None, environment=None, batchSize=1024, serializer=PickleSerializer(), conf=None, gateway=None)
(Constructor)


Create a new SparkContext. At least the master and app name should be set, either through the named parameters here or through conf.

Parameters:
  • master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
  • appName - A name for your job, to display on the cluster web UI.
  • sparkHome - Location where Spark is installed on cluster nodes.
  • pyFiles - Collection of .zip or .py files to send to the cluster and add to PYTHONPATH. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
  • environment - A dictionary of environment variables to set on worker nodes.
  • batchSize - The number of Python objects represented as a single Java object. Set 1 to disable batching or -1 to use an unlimited batch size.
  • serializer - The serializer for RDDs.
  • conf - A SparkConf object setting Spark properties.
  • gateway - Use an existing gateway and JVM; otherwise a new JVM will be instantiated.


>>> from pyspark.context import SparkContext
>>> sc = SparkContext('local', 'test')

>>> sc2 = SparkContext('local', 'test2') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
    ...
ValueError:...

Overrides: object.__init__
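
As an alternative to the positional master and appName arguments, the same settings can be supplied through a SparkConf object. A minimal sketch, assuming no other SparkContext is already running in this process:

  from pyspark import SparkConf, SparkContext

  conf = SparkConf().setMaster("local[2]").setAppName("conf-example")
  sc = SparkContext(conf=conf)   # master and appName are taken from conf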

setSystemProperty(cls, key, value)
Class Method


Set a Java system property, such as spark.executor.memory. This must be invoked before instantiating SparkContext.
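
A minimal sketch of the intended call order, assuming no SparkContext has been created yet in this process:

  from pyspark import SparkContext

  # Must run before the SparkContext is constructed; it has no effect afterwards.
  SparkContext.setSystemProperty("spark.executor.memory", "2g")
  sc = SparkContext("local", "system-property-example")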

version(self)


The version of Spark on which this application is running.

Decorators:
  • @property

defaultParallelism(self)


Default level of parallelism to use when not given by user (e.g. for reduce tasks)

Decorators:
  • @property

defaultMinPartitions(self)


Default minimum number of partitions for Hadoop RDDs when not given by the user.

Decorators:
  • @property

parallelize(self, c, numSlices=None)


Distribute a local Python collection to form an RDD.

>>> sc.parallelize(range(5), 5).glom().collect()
[[0], [1], [2], [3], [4]]

pickleFile(self, name, minPartitions=None)


Load an RDD previously saved using RDD.saveAsPickleFile method.

>>> tmpFile = NamedTemporaryFile(delete=True)
>>> tmpFile.close()
>>> sc.parallelize(range(10)).saveAsPickleFile(tmpFile.name, 5)
>>> sorted(sc.pickleFile(tmpFile.name, 3).collect())
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

textFile(self, name, minPartitions=None)


Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.

>>> path = os.path.join(tempdir, "sample-text.txt")
>>> with open(path, "w") as testFile:
...    testFile.write("Hello world!")
>>> textFile = sc.textFile(path)
>>> textFile.collect()
[u'Hello world!']

wholeTextFiles(self, path, minPartitions=None)


Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

For example, if you have the following files:

 hdfs://a-hdfs-path/part-00000
 hdfs://a-hdfs-path/part-00001
 ...
 hdfs://a-hdfs-path/part-nnnnn

Then rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path") produces an RDD containing:

 (a-hdfs-path/part-00000, its content)
 (a-hdfs-path/part-00001, its content)
 ...
 (a-hdfs-path/part-nnnnn, its content)

NOTE: Small files are preferred, as each file will be loaded fully in memory.

>>> dirPath = os.path.join(tempdir, "files")
>>> os.mkdir(dirPath)
>>> with open(os.path.join(dirPath, "1.txt"), "w") as file1:
...    file1.write("1")
>>> with open(os.path.join(dirPath, "2.txt"), "w") as file2:
...    file2.write("2")
>>> textFiles = sc.wholeTextFiles(dirPath)
>>> sorted(textFiles.collect())
[(u'.../1.txt', u'1'), (u'.../2.txt', u'2')]

sequenceFile(self, path, keyClass=None, valueClass=None, keyConverter=None, valueConverter=None, minSplits=None, batchSize=None)


Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is as follows:

  1. A Java RDD is created from the SequenceFile or other InputFormat, and the key and value Writable classes
  2. Serialization is attempted via Pyrolite pickling
  3. If this fails, the fallback is to call 'toString' on each key and value
  4. PickleSerializer is used to deserialize pickled objects on the Python side
Parameters:
  • path - path to the SequenceFile
  • keyClass - fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
  • valueClass - fully qualified classname of value Writable class (e.g. "org.apache.hadoop.io.LongWritable")
  • keyConverter - (None by default)
  • valueConverter - (None by default)
  • minSplits - minimum splits in dataset (default min(2, sc.defaultParallelism))
  • batchSize - The number of Python objects represented as a single Java object. (default sc._default_batch_size_for_serialized_input)
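
An illustrative sketch; the path and the Writable classes below are assumptions about the file's contents, not values from the original documentation:

  # Hypothetical SequenceFile of (Text, LongWritable) records.
  rdd = sc.sequenceFile("hdfs://some-namenode/data/counts.seq",
                        keyClass="org.apache.hadoop.io.Text",
                        valueClass="org.apache.hadoop.io.LongWritable")
  pairs = rdd.collect()   # plain Python (key, value) tuples after conversion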

newAPIHadoopFile(self, path, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)


Read a 'new API' Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is the same as for sc.sequenceFile.

A Hadoop configuration can be passed in as a Python dict. This will be converted into a Configuration in Java.

Parameters:
  • path - path to Hadoop file
  • inputFormatClass - fully qualified classname of Hadoop InputFormat (e.g. "org.apache.hadoop.mapreduce.lib.input.TextInputFormat")
  • keyClass - fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
  • valueClass - fully qualified classname of value Writable class (e.g. "org.apache.hadoop.io.LongWritable")
  • keyConverter - (None by default)
  • valueConverter - (None by default)
  • conf - Hadoop configuration, passed in as a dict (None by default)
  • batchSize - The number of Python objects represented as a single Java object. (default sc._default_batch_size_for_serialized_input)
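
A hedged sketch of reading plain text through the new-API TextInputFormat, passing one extra Hadoop setting as a dict; the path and the delimiter value are hypothetical:

  hadoop_conf = {"textinputformat.record.delimiter": "\n"}      # hypothetical extra setting
  lines = sc.newAPIHadoopFile(
      "hdfs://some-namenode/data/events.log",                   # hypothetical path
      "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
      "org.apache.hadoop.io.LongWritable",                      # key: byte offset of each line
      "org.apache.hadoop.io.Text",                               # value: the line itself
      conf=hadoop_conf)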

newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)


Read a 'new API' Hadoop InputFormat with arbitrary key and value class, from an arbitrary Hadoop configuration, which is passed in as a Python dict. This will be converted into a Configuration in Java. The mechanism is the same as for sc.sequenceFile.

Parameters:
  • inputFormatClass - fully qualified classname of Hadoop InputFormat (e.g. "org.apache.hadoop.mapreduce.lib.input.TextInputFormat")
  • keyClass - fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
  • valueClass - fully qualified classname of value Writable class (e.g. "org.apache.hadoop.io.LongWritable")
  • keyConverter - (None by default)
  • valueConverter - (None by default)
  • conf - Hadoop configuration, passed in as a dict (None by default)
  • batchSize - The number of Python objects represented as a single Java object. (default sc._default_batch_size_for_serialized_input)

hadoopFile(self, path, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)


Read an 'old' Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is the same as for sc.sequenceFile.

A Hadoop configuration can be passed in as a Python dict. This will be converted into a Configuration in Java.

Parameters:
  • path - path to Hadoop file
  • inputFormatClass - fully qualified classname of Hadoop InputFormat (e.g. "org.apache.hadoop.mapred.TextInputFormat")
  • keyClass - fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
  • valueClass - fully qualified classname of value Writable class (e.g. "org.apache.hadoop.io.LongWritable")
  • keyConverter - (None by default)
  • valueConverter - (None by default)
  • conf - Hadoop configuration, passed in as a dict (None by default)
  • batchSize - The number of Python objects represented as a single Java object. (default sc._default_batch_size_for_serialized_input)

hadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=None)


Read an 'old' Hadoop InputFormat with arbitrary key and value class, from an arbitrary Hadoop configuration, which is passed in as a Python dict. This will be converted into a Configuration in Java. The mechanism is the same as for sc.sequenceFile.

Parameters:
  • inputFormatClass - fully qualified classname of Hadoop InputFormat (e.g. "org.apache.hadoop.mapred.TextInputFormat")
  • keyClass - fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
  • valueClass - fully qualified classname of value Writable class (e.g. "org.apache.hadoop.io.LongWritable")
  • keyConverter - (None by default)
  • valueConverter - (None by default)
  • conf - Hadoop configuration, passed in as a dict (None by default)
  • batchSize - The number of Python objects represented as a single Java object. (default sc._default_batch_size_for_serialized_input)
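
A hedged sketch using the old-API TextInputFormat; since there is no path argument, the input directory is supplied through the configuration dict (the path and property value here are hypothetical):

  hadoop_conf = {"mapred.input.dir": "hdfs://some-namenode/data/old-api-input"}
  lines = sc.hadoopRDD(
      "org.apache.hadoop.mapred.TextInputFormat",
      "org.apache.hadoop.io.LongWritable",   # key: byte offset of each line
      "org.apache.hadoop.io.Text",           # value: the line itself
      conf=hadoop_conf)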

union(self, rdds)


Build the union of a list of RDDs.

This supports unions of RDDs with different serialized formats, although it forces them to be reserialized using the default serializer:

>>> path = os.path.join(tempdir, "union-text.txt")
>>> with open(path, "w") as testFile:
...    testFile.write("Hello")
>>> textFile = sc.textFile(path)
>>> textFile.collect()
[u'Hello']
>>> parallelized = sc.parallelize(["World!"])
>>> sorted(sc.union([textFile, parallelized]).collect())
[u'Hello', 'World!']

broadcast(self, value)


Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions. The variable will be sent to each cluster only once.
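
A minimal sketch of typical use:

  b = sc.broadcast([1, 2, 3, 4, 5])    # shipped to each executor only once
  b.value                              # the driver can also read it back directly
  sc.parallelize([0, 2, 4]).map(lambda i: b.value[i] ** 2).collect()
  # -> [1, 9, 25]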

accumulator(self, value, accum_param=None)


Create an Accumulator with the given initial value, using a given AccumulatorParam helper object to define how values of the data type are added, if one is provided. Default AccumulatorParams are used for integers and floating-point numbers if you do not provide one; for other types, a custom AccumulatorParam can be used.
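
A minimal sketch using the default integer AccumulatorParam:

  acc = sc.accumulator(0)                                      # starts at 0, combined with +
  sc.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))   # tasks can only add to it
  acc.value                                                    # 10, readable only on the driver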

addFile(self, path)


Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

To access the file in Spark jobs, use SparkFiles.get(path) to find its download location.

>>> from pyspark import SparkFiles
>>> path = os.path.join(tempdir, "test.txt")
>>> with open(path, "w") as testFile:
...    testFile.write("100")
>>> sc.addFile(path)
>>> def func(iterator):
...    with open(SparkFiles.get("test.txt")) as testFile:
...        fileVal = int(testFile.readline())
...        return [x * fileVal for x in iterator]
>>> sc.parallelize([1, 2, 3, 4]).mapPartitions(func).collect()
[100, 200, 300, 400]

addPyFile(self, path)


Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.
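
A hedged sketch; the archive name and the module it contains are hypothetical:

  sc.addPyFile("/path/to/mylib.zip")      # hypothetical archive containing a package "mylib"

  def use_dependency(x):
      from mylib import helper            # importable on the workers once the archive is shipped
      return helper(x)

  results = sc.parallelize([1, 2, 3]).map(use_dependency).collect()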

setCheckpointDir(self, dirName)


Set the directory under which RDDs are going to be checkpointed. The directory must be a HDFS path if running on a cluster.
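
A short sketch; the HDFS directory is hypothetical:

  sc.setCheckpointDir("hdfs://some-namenode/tmp/checkpoints")   # hypothetical directory
  rdd = sc.parallelize(range(1000)).map(lambda x: x * x)
  rdd.checkpoint()    # mark the RDD for checkpointing under that directory
  rdd.count()         # the first action materializes and saves the checkpoint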

setJobGroup(self, groupId, description, interruptOnCancel=False)


Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

Often, a unit of execution in an application consists of multiple Spark actions or jobs. Application programmers can use this method to group all those jobs together and give a group description. Once set, the Spark web UI will associate such jobs with this group.

The application can use SparkContext.cancelJobGroup to cancel all running jobs in this group.

>>> import thread, threading
>>> from time import sleep
>>> result = "Not Set"
>>> lock = threading.Lock()
>>> def map_func(x):
...     sleep(100)
...     raise Exception("Task should have been cancelled")
>>> def start_job(x):
...     global result
...     try:
...         sc.setJobGroup("job_to_cancel", "some description")
...         result = sc.parallelize(range(x)).map(map_func).collect()
...     except Exception as e:
...         result = "Cancelled"
...     lock.release()
>>> def stop_job():
...     sleep(5)
...     sc.cancelJobGroup("job_to_cancel")
>>> suppress = lock.acquire()
>>> suppress = thread.start_new_thread(start_job, (10,))
>>> suppress = thread.start_new_thread(stop_job, tuple())
>>> suppress = lock.acquire()
>>> print result
Cancelled

If interruptOnCancel is set to true for the job group, then job cancellation will result in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.

getLocalProperty(self, key)


Get a local property set in this thread, or None if it is missing. See setLocalProperty.
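
For example, paired with setLocalProperty to route this thread's jobs to a fair scheduler pool (the pool name is hypothetical):

  sc.setLocalProperty("spark.scheduler.pool", "reporting")   # affects jobs submitted from this thread
  sc.getLocalProperty("spark.scheduler.pool")                # returns the value set above; None if never set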

cancelJobGroup(self, groupId)


Cancel active jobs for the specified group. See SparkContext.setJobGroup for more information.

runJob(self, rdd, partitionFunc, partitions=None, allowLocal=False)


Executes the given partitionFunc on the specified set of partitions, returning the result as an array of elements.

If 'partitions' is not specified, this will run over all partitions.

>>> myRDD = sc.parallelize(range(6), 3)
>>> sc.runJob(myRDD, lambda part: [x * x for x in part])
[0, 1, 4, 9, 16, 25]
>>> myRDD = sc.parallelize(range(6), 3)
>>> sc.runJob(myRDD, lambda part: [x * x for x in part], [0, 2], True)
[0, 1, 16, 25]