Main entry point for Spark functionality. A SparkContext represents the
connection to a Spark cluster, and can be used to create RDDs and
broadcast variables on that cluster.
When you create a new SparkContext, at least the master and app name should
be set, either through the named parameters here or through conf.

Parameters
----------
master : str, optional
    Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local).
appName : str, optional
    A name for your job, to display on the cluster web UI.
sparkHome : str, optional
    Location where Spark is installed on cluster nodes.
pyFiles : list, optional
    Collection of .zip or .py files to send to the cluster and add to
    PYTHONPATH. These can be paths on the local file system or HDFS,
    HTTP, HTTPS, or FTP URLs.
environment : dict, optional
    A dictionary of environment variables to set on worker nodes.
batchSize : int, optional
    The number of Python objects represented as a single Java object.
    Set 1 to disable batching, 0 to automatically choose the batch size
    based on object sizes, or -1 to use an unlimited batch size.
serializer : Serializer, optional
    The serializer for RDDs.
conf : SparkConf, optional
    An object setting Spark properties.
gateway : JavaGateway, optional
    Use an existing gateway and JVM, otherwise a new JVM will be
    instantiated. This is only used internally.
jsc : JavaObject, optional
    The JavaSparkContext instance. This is only used internally.
profiler_cls : type, optional
    A class of custom Profiler used to do profiling
    (default is pyspark.profiler.BasicProfiler).
udf_profiler_cls : type, optional
    A class of custom Profiler used to do UDF profiling
    (default is pyspark.profiler.UDFBasicProfiler).
Notes
-----
Only one SparkContext should be active per JVM. You must stop() the
active SparkContext before creating a new one.

A SparkContext instance cannot be shared across multiple processes out
of the box, and PySpark does not guarantee multi-processing execution.
Use threads instead for concurrent processing.
Examples
--------
>>> from pyspark.context import SparkContext
>>> sc = SparkContext('local', 'test')
>>> sc2 = SparkContext('local', 'test2')
Traceback (most recent call last):
    ...
ValueError: ...
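Stopping the active context releases it, after which a new one can be
created; a minimal continuation of the example above:

>>> sc.stop()
>>> sc3 = SparkContext('local', 'test3')  # allowed once the previous context is stopped
>>> sc3.stop()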
Methods
-------
accumulator(value[, accum_param])
    Create an Accumulator with the given initial value, using a given
    AccumulatorParam helper object to define how to add values of the
    data type if provided.
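A minimal sketch of accumulator usage, assuming an active SparkContext
bound to sc:

>>> acc = sc.accumulator(0)  # int accumulator with initial value 0
>>> sc.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))
>>> acc.value
10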
addArchive(path)
    Add an archive to be downloaded with this Spark job on every node.
addFile(path[, recursive])
    Add a file to be downloaded with this Spark job on every node (see
    the sketch below).
addPyFile(path)
    Add a .py or .zip dependency for all tasks to be executed on this
    SparkContext in the future.
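A sketch of shipping a file to every node with addFile and reading it
back through SparkFiles; the path /tmp/lookup.txt is hypothetical and
sc is assumed to be an active SparkContext:

>>> from pyspark import SparkFiles
>>> sc.addFile("/tmp/lookup.txt")  # hypothetical local file
>>> def first_line(_):
...     with open(SparkFiles.get("lookup.txt")) as f:  # resolves the per-node copy
...         return f.readline()
>>> lines = sc.parallelize([0]).map(first_line).collect()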
binaryFiles(path[, minPartitions])
    Read a directory of binary files from HDFS, a local file system
    (available on all nodes), or any Hadoop-supported file system URI
    as a byte array.
binaryRecords(path, recordLength)
    Load data from a flat binary file, assuming each record is a set of
    numbers with the specified numerical format (see ByteBuffer), and
    the number of bytes per record is constant.
broadcast(value)
    Broadcast a read-only variable to the cluster, returning a
    Broadcast object for reading it in distributed functions.
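A minimal broadcast sketch, assuming an active SparkContext sc; the
lookup dict is illustrative:

>>> b = sc.broadcast({"a": 1, "b": 2})
>>> sc.parallelize(["a", "b", "a"]).map(lambda k: b.value[k]).collect()
[1, 2, 1]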
cancelAllJobs()
    Cancel all jobs that have been scheduled or are running.
cancelJobGroup(groupId)
    Cancel active jobs for the specified group.
dump_profiles(path)
    Dump the profile stats into the given directory path.
emptyRDD()
    Create an RDD that has no partitions or elements.
getCheckpointDir()
    Return the directory where RDDs are checkpointed.
getLocalProperty(key)
    Get a local property set in this thread, or None if it is missing.
getOrCreate([conf])
    Get or instantiate a SparkContext and register it as a singleton object.
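A sketch of the singleton behaviour of getOrCreate; the master and app
name are illustrative:

>>> from pyspark import SparkConf, SparkContext
>>> conf = SparkConf().setMaster("local").setAppName("example")
>>> sc = SparkContext.getOrCreate(conf)
>>> sc is SparkContext.getOrCreate()  # the existing context is returned
True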
hadoopFile(path, inputFormatClass, keyClass, …)
    Read an 'old' Hadoop InputFormat with arbitrary key and value class
    from HDFS, a local file system (available on all nodes), or any
    Hadoop-supported file system URI.
hadoopRDD(inputFormatClass, keyClass, valueClass)
    Read an 'old' Hadoop InputFormat with arbitrary key and value
    class, from an arbitrary Hadoop configuration, which is passed in
    as a Python dict.
newAPIHadoopFile(path, inputFormatClass, …)
    Read a 'new API' Hadoop InputFormat with arbitrary key and value
    class from HDFS, a local file system (available on all nodes), or
    any Hadoop-supported file system URI.
newAPIHadoopRDD(inputFormatClass, keyClass, …)
    Read a 'new API' Hadoop InputFormat with arbitrary key and value
    class, from an arbitrary Hadoop configuration, which is passed in
    as a Python dict.
parallelize(c[, numSlices])
    Distribute a local Python collection to form an RDD.
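A minimal parallelize sketch, assuming an active SparkContext sc:

>>> rdd = sc.parallelize(range(10), 4)
>>> rdd.getNumPartitions()
4
>>> rdd.sum()
45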
pickleFile(name[, minPartitions])
    Load an RDD previously saved using RDD.saveAsPickleFile().
range(start[, end, step, numSlices])
    Create a new RDD of int containing elements from start to end
    (exclusive), increased by step every element.
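range mirrors the semantics of Python's built-in range; a minimal
sketch, assuming an active SparkContext sc:

>>> sc.range(5).collect()
[0, 1, 2, 3, 4]
>>> sc.range(2, 10, 2).collect()
[2, 4, 6, 8]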
runJob(rdd, partitionFunc[, partitions, …])
    Execute the given partitionFunc on the specified set of partitions,
    returning the result as an array of elements.
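A runJob sketch that evaluates only the first two partitions, assuming
an active SparkContext sc:

>>> rdd = sc.parallelize(range(8), 4)  # partitions: [0,1], [2,3], [4,5], [6,7]
>>> sc.runJob(rdd, lambda it: [sum(it)], partitions=[0, 1])
[1, 5]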
sequenceFile(path[, keyClass, valueClass, …])
    Read a Hadoop SequenceFile with arbitrary key and value Writable
    class from HDFS, a local file system (available on all nodes), or
    any Hadoop-supported file system URI.
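A sequence-file round-trip sketch; outdir is a hypothetical writable
path and sc an active SparkContext:

>>> outdir = "/tmp/seq-demo"  # hypothetical path
>>> sc.parallelize([(1, "a"), (2, "b")]).saveAsSequenceFile(outdir)
>>> sorted(sc.sequenceFile(outdir).collect())
[(1, 'a'), (2, 'b')]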
setCheckpointDir(dirName)
    Set the directory under which RDDs are going to be checkpointed.
setJobDescription(value)
    Set a human-readable description of the current job.
setJobGroup(groupId, description[, …])
    Assign a group ID to all the jobs started by this thread until the
    group ID is set to a different value or cleared.
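Job groups are per-thread, so the group must be set in the thread that
submits the job; a cancellation sketch with illustrative group ID and
timings, assuming an active SparkContext sc:

>>> import threading, time
>>> def run():
...     sc.setJobGroup("demo-group", "long-running demo job")
...     try:
...         sc.parallelize(range(10 ** 8)).count()
...     except Exception:
...         pass  # cancellation surfaces here as an exception
>>> t = threading.Thread(target=run)
>>> t.start()
>>> time.sleep(2)  # give the job time to start
>>> sc.cancelJobGroup("demo-group")
>>> t.join()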
setLocalProperty(key, value)
    Set a local property that affects jobs submitted from this thread,
    such as the Spark fair scheduler pool.
setLogLevel(logLevel)
    Control the log level; this overrides any user-defined log settings.
setSystemProperty(key, value)
    Set a Java system property, such as spark.executor.memory.
show_profiles()
    Print the profile stats to stdout.
sparkUser()
    Get SPARK_USER for the user who is running this SparkContext.
statusTracker()
    Return a StatusTracker object.
stop()
    Shut down the SparkContext.
textFile(name[, minPartitions, use_unicode])
    Read a text file from HDFS, a local file system (available on all
    nodes), or any Hadoop-supported file system URI, and return it as
    an RDD of Strings.
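A textFile round trip against a temporary file, assuming an active
SparkContext sc:

>>> import os, tempfile
>>> path = os.path.join(tempfile.mkdtemp(), "sample.txt")
>>> with open(path, "w") as f:
...     _ = f.write("Hello world!")
>>> sc.textFile(path).collect()
['Hello world!']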
union(rdds)
    Build the union of a list of RDDs.
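A minimal union sketch, assuming an active SparkContext sc:

>>> a = sc.parallelize([1, 2])
>>> b = sc.parallelize([3, 4])
>>> sc.union([a, b]).collect()
[1, 2, 3, 4]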
wholeTextFiles(path[, minPartitions, …])
    Read a directory of text files from HDFS, a local file system
    (available on all nodes), or any Hadoop-supported file system URI.
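A wholeTextFiles sketch; /tmp/docs is a hypothetical directory of small
text files, and each element of the result is a (filePath, fileContent)
pair:

>>> pairs = sc.wholeTextFiles("/tmp/docs")  # hypothetical directory
>>> names = pairs.keys().collect()          # one entry per file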
Attributes
----------
applicationId
    A unique identifier for the Spark application.
defaultMinPartitions
    Default min number of partitions for Hadoop RDDs when not given by user.
defaultParallelism
    Default level of parallelism to use when not given by user (e.g. for
    reduce tasks).
startTime
    Return the epoch time when the SparkContext was started.
uiWebUrl
    Return the URL of the SparkUI instance started by this SparkContext.
version
    The version of Spark on which this application is running.
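A sketch of reading these attributes; the printed values are
illustrative and depend on the deployment:

>>> print(sc.version)              # e.g. 3.3.0
>>> print(sc.applicationId)        # e.g. local-1658850519831
>>> print(sc.defaultParallelism)   # depends on the master/cluster
>>> print(sc.uiWebUrl)             # e.g. http://localhost:4040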