Package pyspark :: Module context :: Class SparkContext

Class SparkContext


object --+
         |
        SparkContext

Main entry point for Spark functionality. A SparkContext represents the connection to a Spark cluster, and can be used to create RDDs and broadcast variables on that cluster.
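
For illustration, a minimal sketch of typical usage (the master URL, job name, and data are placeholders and assume a working local Spark installation):

>>> sc = SparkContext("local", "Example Job")
>>> rdd = sc.parallelize([1, 2, 3, 4])
>>> rdd.map(lambda x: x * 2).collect()
[2, 4, 6, 8]
>>> sc.stop()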

Instance Methods
 
__init__(self, master, jobName, sparkHome=None, pyFiles=None, environment=None, batchSize=1024)
Create a new SparkContext.
 
defaultParallelism(self)
Default level of parallelism to use when not given by the user (e.g. for reduce tasks).
 
__del__(self)
 
stop(self)
Shut down the SparkContext.
 
parallelize(self, c, numSlices=None)
Distribute a local Python collection to form an RDD.
 
textFile(self, name, minSplits=None)
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
 
union(self, rdds)
Build the union of a list of RDDs.
 
broadcast(self, value)
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
 
accumulator(self, value, accum_param=None)
Create an Accumulator with the given initial value, using a given AccumulatorParam helper object to define how to add values of the data type if provided.
 
addFile(self, path)
Add a file to be downloaded with this Spark job on every node.
 
clearFiles(self)
Clear the job's list of files added by addFile or addPyFile so that they do not get downloaded to any new nodes.
 
addPyFile(self, path)
Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future.
 
setCheckpointDir(self, dirName, useExisting=False)
Set the directory under which RDDs are going to be checkpointed.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

Inherited from object: __class__

Method Details

__init__(self, master, jobName, sparkHome=None, pyFiles=None, environment=None, batchSize=1024)
(Constructor)


Create a new SparkContext.

Parameters:
  • master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
  • jobName - A name for your job, to display on the cluster web UI.
  • sparkHome - Location where Spark is installed on cluster nodes.
  • pyFiles - Collection of .zip or .py files to send to the cluster and add to PYTHONPATH. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
  • environment - A dictionary of environment variables to set on worker nodes.
  • batchSize - The number of Python objects represented as a single Java object. Set to 1 to disable batching, or to -1 to use an unlimited batch size.
Overrides: object.__init__
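
For illustration, a hypothetical constructor call exercising the optional parameters (the paths, file name, and environment variable are placeholders, not values required by Spark):

>>> sc = SparkContext("local[4]", "My Job",
...                   sparkHome="/opt/spark",
...                   pyFiles=["helpers.py"],
...                   environment={"MY_ENV_VAR": "value"},
...                   batchSize=1024)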

defaultParallelism(self)


Default level of parallelism to use when not given by the user (e.g. for reduce tasks).

Decorators:
  • @property
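
For illustration, assuming sc was created with master local[4] (the reported value depends on the master):

>>> sc.defaultParallelism
4
>>> sc.parallelize([1, 2, 3, 4]).collect()  # numSlices defaults to this value
[1, 2, 3, 4]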

broadcast(self, value)


Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions. The variable will be sent to each node only once.
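
A minimal sketch (assuming sc is an existing SparkContext):

>>> lookup = sc.broadcast({"a": 1, "b": 2})
>>> sc.parallelize(["a", "b", "a"]).map(lambda k: lookup.value[k]).collect()
[1, 2, 1]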

accumulator(self, value, accum_param=None)


Create an Accumulator with the given initial value, using a given AccumulatorParam helper object, if provided, to define how values of that data type are added. Default AccumulatorParams are used for integers and floating-point numbers if you do not provide one. For other types, a custom AccumulatorParam can be used.
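
A minimal sketch using the default integer AccumulatorParam (assuming sc is an existing SparkContext):

>>> acc = sc.accumulator(0)
>>> sc.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))
>>> acc.value
10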

addFile(self, path)


Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

To access the file in Spark jobs, use SparkFiles.get(fileName) with the file's name to find its download location.

>>> import os
>>> from pyspark import SparkFiles
>>> path = os.path.join(tempdir, "test.txt")
>>> with open(path, "w") as testFile:
...    testFile.write("100")
>>> sc.addFile(path)
>>> def func(iterator):
...    with open(SparkFiles.get("test.txt")) as testFile:
...        fileVal = int(testFile.readline())
...        return [x * fileVal for x in iterator]
>>> sc.parallelize([1, 2, 3, 4]).mapPartitions(func).collect()
[100, 200, 300, 400]

addPyFile(self, path)


Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.
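
For illustration, a hypothetical sketch (the path and module name mymodule are placeholders for a dependency you have written yourself):

>>> sc.addPyFile("/path/to/mymodule.py")
>>> def apply_module(x):
...     import mymodule        # importable on the workers after addPyFile
...     return mymodule.transform(x)
>>> sc.parallelize([1, 2, 3]).map(apply_module).collect()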

setCheckpointDir(self, dirName, useExisting=False)


Set the directory under which RDDs are going to be checkpointed. The directory must be an HDFS path if running on a cluster.

If the directory does not exist, it will be created. If the directory already exists and useExisting is set to True, then the existing directory will be used. Otherwise, an exception is thrown to prevent accidental overwriting of checkpoint files in the existing directory.
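
A brief sketch (the local directory is a placeholder; on a cluster this would need to be an HDFS path):

>>> sc.setCheckpointDir("/tmp/spark-checkpoints")
>>> rdd = sc.parallelize([1, 2, 3, 4]).map(lambda x: x * x)
>>> rdd.checkpoint()
>>> rdd.collect()
[1, 4, 9, 16]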