org.apache.spark.api.java
Class JavaSparkContext

Object
  extended by org.apache.spark.api.java.JavaSparkContext
All Implemented Interfaces:
java.io.Closeable

public class JavaSparkContext
extends Object
implements java.io.Closeable

A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.

Only one SparkContext may be active per JVM. You must stop() the active SparkContext before creating a new one. This limitation may eventually be removed; see SPARK-2243 for more details.
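
For example, a minimal driver builds a context from a SparkConf, runs a single job, and then stops the context. The application name and master URL below are placeholders:

   // requires: import java.util.Arrays; import org.apache.spark.SparkConf;
   SparkConf conf = new SparkConf().setAppName("ExampleApp").setMaster("local[2]");
   JavaSparkContext sc = new JavaSparkContext(conf);
   JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4));
   System.out.println(numbers.count());   // prints 4
   sc.stop();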


Constructor Summary
JavaSparkContext()
          Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
JavaSparkContext(SparkConf conf)
           
JavaSparkContext(SparkContext sc)
           
JavaSparkContext(String master, String appName)
           
JavaSparkContext(String master, String appName, SparkConf conf)
           
JavaSparkContext(String master, String appName, String sparkHome, String jarFile)
           
JavaSparkContext(String master, String appName, String sparkHome, String[] jars)
           
JavaSparkContext(String master, String appName, String sparkHome, String[] jars, java.util.Map<String,String> environment)
           
 
Method Summary
<T,R> Accumulable<T,R>
accumulable(T initialValue, AccumulableParam<T,R> param)
          Create an Accumulable shared variable of the given type, to which tasks can "add" values with add.
<T,R> Accumulable<T,R>
accumulable(T initialValue, String name, AccumulableParam<T,R> param)
          Create an Accumulable shared variable of the given type, to which tasks can "add" values with add.
 Accumulator<Double> accumulator(double initialValue)
          Create an Accumulator double variable, which tasks can "add" values to using the add method.
 Accumulator<Double> accumulator(double initialValue, String name)
          Create an Accumulator double variable, which tasks can "add" values to using the add method.
 Accumulator<Integer> accumulator(int initialValue)
          Create an Accumulator integer variable, which tasks can "add" values to using the add method.
 Accumulator<Integer> accumulator(int initialValue, String name)
          Create an Accumulator integer variable, which tasks can "add" values to using the add method.
<T> Accumulator<T>
accumulator(T initialValue, AccumulatorParam<T> accumulatorParam)
          Create an Accumulator variable of a given type, which tasks can "add" values to using the add method.
<T> Accumulator<T>
accumulator(T initialValue, String name, AccumulatorParam<T> accumulatorParam)
          Create an Accumulator variable of a given type, which tasks can "add" values to using the add method.
 void addFile(String path)
          Add a file to be downloaded with this Spark job on every node.
 void addJar(String path)
          Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
 String appName()
           
 JavaPairRDD<String,PortableDataStream> binaryFiles(String path)
          :: Experimental :: Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
 JavaPairRDD<String,PortableDataStream> binaryFiles(String path, int minPartitions)
          Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
 JavaRDD<byte[]> binaryRecords(String path, int recordLength)
          :: Experimental :: Load data from a flat binary file, assuming the length of each record is constant.
<T> Broadcast<T>
broadcast(T value)
          Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
 void cancelAllJobs()
          Cancel all jobs that have been scheduled or are running.
 void cancelJobGroup(String groupId)
          Cancel active jobs for the specified group.
 void clearCallSite()
          Pass-through to SparkContext.clearCallSite.
 void clearFiles()
          Clear the job's list of files added by addFile so that they do not get downloaded to any new nodes.
 void clearJars()
          Clear the job's list of JARs added by addJar so that they do not get downloaded to any new nodes.
 void clearJobGroup()
          Clear the current thread's job group ID and its description.
 void close()
           
 Integer defaultMinPartitions()
          Default min number of partitions for Hadoop RDDs when not given by user
 Integer defaultMinSplits()
          Deprecated. As of Spark 1.0.0, defaultMinSplits is deprecated, use defaultMinPartitions() instead
 Integer defaultParallelism()
          Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
 Accumulator<Double> doubleAccumulator(double initialValue)
          Create an Accumulator double variable, which tasks can "add" values to using the add method.
 Accumulator<Double> doubleAccumulator(double initialValue, String name)
          Create an Accumulator double variable, which tasks can "add" values to using the add method.
<T> JavaRDD<T>
emptyRDD()
          Get an RDD that has no partitions or elements.
 SparkEnv env()
           
static JavaSparkContext fromSparkContext(SparkContext sc)
           
 com.google.common.base.Optional<String> getCheckpointDir()
           
 SparkConf getConf()
          Return a copy of this JavaSparkContext's configuration.
 String getLocalProperty(String key)
          Get a local property set in this thread, or null if it is missing.
 com.google.common.base.Optional<String> getSparkHome()
          Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference).
 org.apache.hadoop.conf.Configuration hadoopConfiguration()
          Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.
<K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>>
JavaPairRDD<K,V>
hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass)
          Get an RDD for a Hadoop file with an arbitrary InputFormat.
<K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>>
JavaPairRDD<K,V>
hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
          Get an RDD for a Hadoop file with an arbitrary InputFormat.
<K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>>
JavaPairRDD<K,V>
hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass)
          Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
<K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>>
JavaPairRDD<K,V>
hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
          Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
 Accumulator<Integer> intAccumulator(int initialValue)
          Create an Accumulator integer variable, which tasks can "add" values to using the add method.
 Accumulator<Integer> intAccumulator(int initialValue, String name)
          Create an Accumulator integer variable, which tasks can "add" values to using the add method.
 Boolean isLocal()
           
static String[] jarOfClass(Class<?> cls)
          Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
static String[] jarOfObject(Object obj)
          Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
 java.util.List<String> jars()
           
 String master()
           
<K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>>
JavaPairRDD<K,V>
newAPIHadoopFile(String path, Class<F> fClass, Class<K> kClass, Class<V> vClass, org.apache.hadoop.conf.Configuration conf)
          Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
<K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>>
JavaPairRDD<K,V>
newAPIHadoopRDD(org.apache.hadoop.conf.Configuration conf, Class<F> fClass, Class<K> kClass, Class<V> vClass)
          Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
<T> JavaRDD<T>
objectFile(String path)
          Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
<T> JavaRDD<T>
objectFile(String path, int minPartitions)
          Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
<T> JavaRDD<T>
parallelize(java.util.List<T> list)
          Distribute a local Java collection to form an RDD.
<T> JavaRDD<T>
parallelize(java.util.List<T> list, int numSlices)
          Distribute a local Java collection to form an RDD.
 JavaDoubleRDD parallelizeDoubles(java.util.List<Double> list)
          Distribute a local Java collection to form an RDD.
 JavaDoubleRDD parallelizeDoubles(java.util.List<Double> list, int numSlices)
          Distribute a local Java collection to form an RDD.
<K,V> JavaPairRDD<K,V>
parallelizePairs(java.util.List<scala.Tuple2<K,V>> list)
          Distribute a local Java collection to form an RDD.
<K,V> JavaPairRDD<K,V>
parallelizePairs(java.util.List<scala.Tuple2<K,V>> list, int numSlices)
          Distribute a local Java collection to form an RDD.
 SparkContext sc()
           
<K,V> JavaPairRDD<K,V>
sequenceFile(String path, Class<K> keyClass, Class<V> valueClass)
          Get an RDD for a Hadoop SequenceFile.
<K,V> JavaPairRDD<K,V>
sequenceFile(String path, Class<K> keyClass, Class<V> valueClass, int minPartitions)
          Get an RDD for a Hadoop SequenceFile with given key and value types.
 void setCallSite(String site)
          Pass-through to SparkContext.setCallSite.
 void setCheckpointDir(String dir)
          Set the directory under which RDDs are going to be checkpointed.
 void setJobGroup(String groupId, String description)
          Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
 void setJobGroup(String groupId, String description, boolean interruptOnCancel)
          Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
 void setLocalProperty(String key, String value)
          Set a local property that affects jobs submitted from this thread, such as the Spark fair scheduler pool.
 void setLogLevel(String logLevel)
          Control the log level.
 String sparkUser()
           
 Long startTime()
           
 JavaSparkStatusTracker statusTracker()
           
 void stop()
          Shut down the SparkContext.
 JavaRDD<String> textFile(String path)
          Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
 JavaRDD<String> textFile(String path, int minPartitions)
          Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
static SparkContext toSparkContext(JavaSparkContext jsc)
           
 JavaDoubleRDD union(JavaDoubleRDD... rdds)
           
 JavaDoubleRDD union(JavaDoubleRDD first, java.util.List<JavaDoubleRDD> rest)
          Build the union of two or more RDDs.
<K,V> JavaPairRDD<K,V>
union(JavaPairRDD<K,V>... rdds)
           
<K,V> JavaPairRDD<K,V>
union(JavaPairRDD<K,V> first, java.util.List<JavaPairRDD<K,V>> rest)
          Build the union of two or more RDDs.
<T> JavaRDD<T>
union(JavaRDD<T>... rdds)
           
<T> JavaRDD<T>
union(JavaRDD<T> first, java.util.List<JavaRDD<T>> rest)
          Build the union of two or more RDDs.
 String version()
          The version of Spark on which this application is running.
 JavaPairRDD<String,String> wholeTextFiles(String path)
          Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
 JavaPairRDD<String,String> wholeTextFiles(String path, int minPartitions)
          Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

JavaSparkContext

public JavaSparkContext(SparkContext sc)

JavaSparkContext

public JavaSparkContext()
Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).


JavaSparkContext

public JavaSparkContext(SparkConf conf)
Parameters:
conf - a SparkConf object specifying Spark parameters

JavaSparkContext

public JavaSparkContext(String master,
                        String appName)
Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI

JavaSparkContext

public JavaSparkContext(String master,
                        String appName,
                        SparkConf conf)
Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
conf - a SparkConf object specifying other Spark parameters

JavaSparkContext

public JavaSparkContext(String master,
                        String appName,
                        String sparkHome,
                        String jarFile)
Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
sparkHome - The SPARK_HOME directory on the worker nodes
jarFile - JAR file to send to the cluster. This can be a path on the local file system or an HDFS, HTTP, HTTPS, or FTP URL.

JavaSparkContext

public JavaSparkContext(String master,
                        String appName,
                        String sparkHome,
                        String[] jars)
Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
sparkHome - The SPARK_HOME directory on the worker nodes
jars - Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.

JavaSparkContext

public JavaSparkContext(String master,
                        String appName,
                        String sparkHome,
                        String[] jars,
                        java.util.Map<String,String> environment)
Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
sparkHome - The SPARK_HOME directory on the worker nodes
jars - Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
environment - Environment variables to set on worker nodes
Method Detail

fromSparkContext

public static JavaSparkContext fromSparkContext(SparkContext sc)

toSparkContext

public static SparkContext toSparkContext(JavaSparkContext jsc)

jarOfClass

public static String[] jarOfClass(Class<?> cls)
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.

Parameters:
cls - (undocumented)
Returns:
(undocumented)

jarOfObject

public static String[] jarOfObject(Object obj)
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext. In most cases you can call jarOfObject(this) in your driver program.

Parameters:
obj - (undocumented)
Returns:
(undocumented)

sc

public SparkContext sc()

env

public SparkEnv env()

statusTracker

public JavaSparkStatusTracker statusTracker()

isLocal

public Boolean isLocal()

sparkUser

public String sparkUser()

master

public String master()

appName

public String appName()

jars

public java.util.List<String> jars()

startTime

public Long startTime()

version

public String version()
The version of Spark on which this application is running.


defaultParallelism

public Integer defaultParallelism()
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).


defaultMinSplits

public Integer defaultMinSplits()
Deprecated. As of Spark 1.0.0, defaultMinSplits is deprecated, use defaultMinPartitions() instead

Default min number of partitions for Hadoop RDDs when not given by user.

Returns:
(undocumented)

defaultMinPartitions

public Integer defaultMinPartitions()
Default min number of partitions for Hadoop RDDs when not given by user


parallelize

public <T> JavaRDD<T> parallelize(java.util.List<T> list,
                                  int numSlices)
Distribute a local Java collection to form an RDD.


emptyRDD

public <T> JavaRDD<T> emptyRDD()
Get an RDD that has no partitions or elements.


parallelize

public <T> JavaRDD<T> parallelize(java.util.List<T> list)
Distribute a local Java collection to form an RDD.
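
For example (a minimal sketch; the numbers and the result are illustrative, and sc denotes this JavaSparkContext):

   // requires: import java.util.Arrays;
   JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
   int sum = rdd.reduce((a, b) -> a + b);   // 15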


parallelizePairs

public <K,V> JavaPairRDD<K,V> parallelizePairs(java.util.List<scala.Tuple2<K,V>> list,
                                               int numSlices)
Distribute a local Java collection to form an RDD.


parallelizePairs

public <K,V> JavaPairRDD<K,V> parallelizePairs(java.util.List<scala.Tuple2<K,V>> list)
Distribute a local Java collection to form an RDD.


parallelizeDoubles

public JavaDoubleRDD parallelizeDoubles(java.util.List<Double> list,
                                        int numSlices)
Distribute a local Java collection to form an RDD.


parallelizeDoubles

public JavaDoubleRDD parallelizeDoubles(java.util.List<Double> list)
Distribute a local Java collection to form an RDD.


textFile

public JavaRDD<String> textFile(String path)
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
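
For example, counting the lines that contain a given substring (the path and substring below are placeholders):

   JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log");
   long errors = lines.filter(line -> line.contains("ERROR")).count();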

Parameters:
path - (undocumented)
Returns:
(undocumented)

textFile

public JavaRDD<String> textFile(String path,
                                int minPartitions)
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.

Parameters:
path - (undocumented)
minPartitions - (undocumented)
Returns:
(undocumented)

wholeTextFiles

public JavaPairRDD<String,String> wholeTextFiles(String path,
                                                 int minPartitions)
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

For example, if you have the following files:


   hdfs://a-hdfs-path/part-00000
   hdfs://a-hdfs-path/part-00001
   ...
   hdfs://a-hdfs-path/part-nnnnn
 

Do


   JavaPairRDD<String, String> rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path");
 

then rdd contains


   (a-hdfs-path/part-00000, its content)
   (a-hdfs-path/part-00001, its content)
   ...
   (a-hdfs-path/part-nnnnn, its content)
 

Parameters:
minPartitions - A suggestion value of the minimal splitting number for input data.
path - (undocumented)
Returns:
(undocumented)

wholeTextFiles

public JavaPairRDD<String,String> wholeTextFiles(String path)
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

Parameters:
path - (undocumented)
Returns:
(undocumented)
See Also:
wholeTextFiles(path: String, minPartitions: Int).

binaryFiles

public JavaPairRDD<String,PortableDataStream> binaryFiles(String path,
                                                          int minPartitions)
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

For example, if you have the following files:


   hdfs://a-hdfs-path/part-00000
   hdfs://a-hdfs-path/part-00001
   ...
   hdfs://a-hdfs-path/part-nnnnn
 

Do JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path"),

then rdd contains


   (a-hdfs-path/part-00000, its content)
   (a-hdfs-path/part-00001, its content)
   ...
   (a-hdfs-path/part-nnnnn, its content)
 

Parameters:
minPartitions - A suggestion value of the minimal splitting number for input data.
path - (undocumented)
Returns:
(undocumented)

binaryFiles

public JavaPairRDD<String,PortableDataStream> binaryFiles(String path)
:: Experimental ::

Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

For example, if you have the following files:


   hdfs://a-hdfs-path/part-00000
   hdfs://a-hdfs-path/part-00001
   ...
   hdfs://a-hdfs-path/part-nnnnn
 

Do JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path"),

then rdd contains


   (a-hdfs-path/part-00000, its content)
   (a-hdfs-path/part-00001, its content)
   ...
   (a-hdfs-path/part-nnnnn, its content)
 

Parameters:
path - (undocumented)
Returns:
(undocumented)

binaryRecords

public JavaRDD<byte[]> binaryRecords(String path,
                                     int recordLength)
:: Experimental ::

Load data from a flat binary file, assuming the length of each record is constant.
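
For example, a sketch that decodes fixed 8-byte big-endian records; the path and record layout are assumptions for illustration:

   // requires: import java.nio.ByteBuffer;
   JavaRDD<byte[]> records = sc.binaryRecords("hdfs://namenode:8020/data/fixed.bin", 8);
   JavaRDD<Long> values = records.map(bytes -> ByteBuffer.wrap(bytes).getLong());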

Parameters:
path - Directory to the input data files
recordLength - (undocumented)
Returns:
An RDD of data with values, represented as byte arrays

sequenceFile

public <K,V> JavaPairRDD<K,V> sequenceFile(String path,
                                           Class<K> keyClass,
                                           Class<V> valueClass,
                                           int minPartitions)
Get an RDD for a Hadoop SequenceFile with given key and value types.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
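
For example, a sketch that copies the key and value out of the reused Writable objects before caching, as the note above recommends (the path and Writable types are illustrative):

   // requires: import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.Text; import scala.Tuple2;
   JavaPairRDD<Text, IntWritable> raw =
       sc.sequenceFile("hdfs://namenode:8020/data/counts.seq", Text.class, IntWritable.class, 4);
   JavaPairRDD<String, Integer> counts =
       raw.mapToPair(t -> new Tuple2<>(t._1().toString(), t._2().get()));
   counts.cache();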

Parameters:
path - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
minPartitions - (undocumented)
Returns:
(undocumented)

sequenceFile

public <K,V> JavaPairRDD<K,V> sequenceFile(String path,
                                           Class<K> keyClass,
                                           Class<V> valueClass)
Get an RDD for a Hadoop SequenceFile.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.

Parameters:
path - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
Returns:
(undocumented)

objectFile

public <T> JavaRDD<T> objectFile(String path,
                                 int minPartitions)
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.
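
For example, a round trip through this format (the path is a placeholder):

   // requires: import java.util.Arrays;
   JavaRDD<String> words = sc.parallelize(Arrays.asList("spark", "hadoop", "hdfs"));
   words.saveAsObjectFile("hdfs://namenode:8020/tmp/words.obj");
   JavaRDD<String> restored = sc.objectFile("hdfs://namenode:8020/tmp/words.obj");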

Parameters:
path - (undocumented)
minPartitions - (undocumented)
Returns:
(undocumented)

objectFile

public <T> JavaRDD<T> objectFile(String path)
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.

Parameters:
path - (undocumented)
Returns:
(undocumented)

hadoopRDD

public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf,
                                                                                            Class<F> inputFormatClass,
                                                                                            Class<K> keyClass,
                                                                                            Class<V> valueClass,
                                                                                            int minPartitions)
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
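
For example, a sketch that reads text through the old-API TextInputFormat via a JobConf; the input path is a placeholder, and a fresh JobConf is created for this one RDD as recommended below:

   // requires: import org.apache.hadoop.fs.Path;
   //           import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text;
   //           import org.apache.hadoop.mapred.FileInputFormat;
   //           import org.apache.hadoop.mapred.JobConf; import org.apache.hadoop.mapred.TextInputFormat;
   JobConf jobConf = new JobConf();
   FileInputFormat.addInputPath(jobConf, new Path("hdfs://namenode:8020/logs"));
   JavaPairRDD<LongWritable, Text> records =
       sc.hadoopRDD(jobConf, TextInputFormat.class, LongWritable.class, Text.class, 4);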

Parameters:
conf - JobConf for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
inputFormatClass - Class of the InputFormat
keyClass - Class of the keys
valueClass - Class of the values
minPartitions - Minimum number of Hadoop Splits to generate.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.

Returns:
(undocumented)

hadoopRDD

public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf,
                                                                                            Class<F> inputFormatClass,
                                                                                            Class<K> keyClass,
                                                                                            Class<V> valueClass)
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).

Parameters:
conf - JobConf for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
inputFormatClass - Class of the InputFormat
keyClass - Class of the keys
valueClass - Class of the values

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.

Returns:
(undocumented)

hadoopFile

public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopFile(String path,
                                                                                             Class<F> inputFormatClass,
                                                                                             Class<K> keyClass,
                                                                                             Class<V> valueClass,
                                                                                             int minPartitions)
Get an RDD for a Hadoop file with an arbitrary InputFormat.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
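
For example, a sketch that reads text with the old-API TextInputFormat and copies the values out of the reused Text objects right away (the path is a placeholder):

   // requires: import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text;
   //           import org.apache.hadoop.mapred.TextInputFormat;
   JavaPairRDD<LongWritable, Text> raw = sc.hadoopFile(
       "hdfs://namenode:8020/logs", TextInputFormat.class, LongWritable.class, Text.class, 4);
   JavaRDD<String> lines = raw.map(t -> t._2().toString());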

Parameters:
path - (undocumented)
inputFormatClass - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
minPartitions - (undocumented)
Returns:
(undocumented)

hadoopFile

public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopFile(String path,
                                                                                             Class<F> inputFormatClass,
                                                                                             Class<K> keyClass,
                                                                                             Class<V> valueClass)
Get an RDD for a Hadoop file with an arbitrary InputFormat.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.

Parameters:
path - (undocumented)
inputFormatClass - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
Returns:
(undocumented)

newAPIHadoopFile

public <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairRDD<K,V> newAPIHadoopFile(String path,
                                                                                                      Class<F> fClass,
                                                                                                      Class<K> kClass,
                                                                                                      Class<V> vClass,
                                                                                                      org.apache.hadoop.conf.Configuration conf)
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
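
For example, the same text-reading pattern with the new-API (org.apache.hadoop.mapreduce) TextInputFormat; the path is a placeholder and an empty Configuration stands in for any extra options:

   // requires: import org.apache.hadoop.conf.Configuration;
   //           import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text;
   //           import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
   JavaPairRDD<LongWritable, Text> raw = sc.newAPIHadoopFile(
       "hdfs://namenode:8020/logs", TextInputFormat.class,
       LongWritable.class, Text.class, new Configuration());
   JavaRDD<String> lines = raw.map(t -> t._2().toString());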

Parameters:
path - (undocumented)
fClass - (undocumented)
kClass - (undocumented)
vClass - (undocumented)
conf - (undocumented)
Returns:
(undocumented)

newAPIHadoopRDD

public <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairRDD<K,V> newAPIHadoopRDD(org.apache.hadoop.conf.Configuration conf,
                                                                                                     Class<F> fClass,
                                                                                                     Class<K> kClass,
                                                                                                     Class<V> vClass)
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.

Parameters:
conf - Configuration for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
fClass - Class of the InputFormat
kClass - Class of the keys
vClass - Class of the values

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.

Returns:
(undocumented)

union

public <T> JavaRDD<T> union(JavaRDD<T> first,
                            java.util.List<JavaRDD<T>> rest)
Build the union of two or more RDDs.
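
For example (a minimal sketch with illustrative data):

   // requires: import java.util.Arrays;
   JavaRDD<Integer> a = sc.parallelize(Arrays.asList(1, 2));
   JavaRDD<Integer> b = sc.parallelize(Arrays.asList(3, 4));
   JavaRDD<Integer> c = sc.parallelize(Arrays.asList(5, 6));
   JavaRDD<Integer> all = sc.union(a, Arrays.asList(b, c));   // 1, 2, 3, 4, 5, 6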


union

public <K,V> JavaPairRDD<K,V> union(JavaPairRDD<K,V> first,
                                    java.util.List<JavaPairRDD<K,V>> rest)
Build the union of two or more RDDs.


union

public JavaDoubleRDD union(JavaDoubleRDD first,
                           java.util.List<JavaDoubleRDD> rest)
Build the union of two or more RDDs.


intAccumulator

public Accumulator<Integer> intAccumulator(int initialValue)
Create an Accumulator integer variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

Parameters:
initialValue - (undocumented)
Returns:
(undocumented)

intAccumulator

public Accumulator<Integer> intAccumulator(int initialValue,
                                           String name)
Create an Accumulator integer variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

This version supports naming the accumulator for display in Spark's web UI.
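
For example, counting blank lines as a side effect of an action (the input path is a placeholder):

   final Accumulator<Integer> blankLines = sc.intAccumulator(0, "blank lines");
   JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log");
   lines.foreach(line -> { if (line.isEmpty()) blankLines.add(1); });
   System.out.println(blankLines.value());   // read the total on the driver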

Parameters:
initialValue - (undocumented)
name - (undocumented)
Returns:
(undocumented)

doubleAccumulator

public Accumulator<Double> doubleAccumulator(double initialValue)
Create an Accumulator double variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

Parameters:
initialValue - (undocumented)
Returns:
(undocumented)

doubleAccumulator

public Accumulator<Double> doubleAccumulator(double initialValue,
                                             String name)
Create an Accumulator double variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

This version supports naming the accumulator for display in Spark's web UI.

Parameters:
initialValue - (undocumented)
name - (undocumented)
Returns:
(undocumented)

accumulator

public Accumulator<Integer> accumulator(int initialValue)
Create an Accumulator integer variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

Parameters:
initialValue - (undocumented)
Returns:
(undocumented)

accumulator

public Accumulator<Integer> accumulator(int initialValue,
                                        String name)
Create an Accumulator integer variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

This version supports naming the accumulator for display in Spark's web UI.

Parameters:
initialValue - (undocumented)
name - (undocumented)
Returns:
(undocumented)

accumulator

public Accumulator<Double> accumulator(double initialValue)
Create an Accumulator double variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

Parameters:
initialValue - (undocumented)
Returns:
(undocumented)

accumulator

public Accumulator<Double> accumulator(double initialValue,
                                       String name)
Create an Accumulator double variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

This version supports naming the accumulator for display in Spark's web UI.

Parameters:
initialValue - (undocumented)
name - (undocumented)
Returns:
(undocumented)

accumulator

public <T> Accumulator<T> accumulator(T initialValue,
                                      AccumulatorParam<T> accumulatorParam)
Create an Accumulator variable of a given type, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

Parameters:
initialValue - (undocumented)
accumulatorParam - (undocumented)
Returns:
(undocumented)

accumulator

public <T> Accumulator<T> accumulator(T initialValue,
                                      String name,
                                      AccumulatorParam<T> accumulatorParam)
Create an Accumulator variable of a given type, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.

This version supports naming the accumulator for display in Spark's web UI.

Parameters:
initialValue - (undocumented)
name - (undocumented)
accumulatorParam - (undocumented)
Returns:
(undocumented)

accumulable

public <T,R> Accumulable<T,R> accumulable(T initialValue,
                                          AccumulableParam<T,R> param)
Create an Accumulable shared variable of the given type, to which tasks can "add" values with add. Only the master can access the accumulable's value.

Parameters:
initialValue - (undocumented)
param - (undocumented)
Returns:
(undocumented)

accumulable

public <T,R> Accumulable<T,R> accumulable(T initialValue,
                                          String name,
                                          AccumulableParam<T,R> param)
Create an Accumulable shared variable of the given type, to which tasks can "add" values with add. Only the master can access the accumulable's value.

This version supports naming the accumulator for display in Spark's web UI.
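
For example, a sketch that collects distinct "bad" input lines into a set. SetParam is a helper written for this example, not part of Spark, and should live as a top-level or static nested class so it serializes cleanly; the input path is a placeholder:

   // requires: import java.util.HashSet; import org.apache.spark.AccumulableParam;
   class SetParam implements AccumulableParam<HashSet<String>, String> {
     public HashSet<String> addAccumulator(HashSet<String> acc, String elem) {
       acc.add(elem); return acc;
     }
     public HashSet<String> addInPlace(HashSet<String> a, HashSet<String> b) {
       a.addAll(b); return a;
     }
     public HashSet<String> zero(HashSet<String> initial) {
       return new HashSet<String>();
     }
   }

   Accumulable<HashSet<String>, String> badLines =
       sc.accumulable(new HashSet<String>(), "bad lines", new SetParam());
   JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log");
   lines.foreach(line -> { if (line.startsWith("#")) badLines.add(line); });
   System.out.println(badLines.value().size());   // read on the driver only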

Parameters:
initialValue - (undocumented)
name - (undocumented)
param - (undocumented)
Returns:
(undocumented)

broadcast

public <T> Broadcast<T> broadcast(T value)
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions. The variable will be sent to each node only once.
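
For example, shipping a small lookup table to the executors once instead of with every task (the map contents are illustrative):

   // requires: import java.util.Arrays; import java.util.HashMap; import java.util.Map;
   Map<String, Integer> codes = new HashMap<>();
   codes.put("US", 1);
   codes.put("DE", 49);
   final Broadcast<Map<String, Integer>> codesBc = sc.broadcast(codes);
   JavaRDD<String> countries = sc.parallelize(Arrays.asList("US", "DE"));
   JavaRDD<Integer> dialCodes = countries.map(c -> codesBc.value().get(c));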

Parameters:
value - (undocumented)
Returns:
(undocumented)

stop

public void stop()
Shut down the SparkContext.


close

public void close()
Specified by:
close in interface java.io.Closeable

getSparkHome

public com.google.common.base.Optional<String> getSparkHome()
Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference). If none of these is set, an absent Optional is returned.

Returns:
(undocumented)

addFile

public void addFile(String path)
Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs, use SparkFiles.get(fileName) to find its download location.
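
For example (the file name and paths are placeholders):

   // requires: import org.apache.spark.SparkFiles;
   sc.addFile("hdfs://namenode:8020/config/lookup.txt");
   JavaRDD<String> data = sc.textFile("hdfs://namenode:8020/data/input.txt");
   data.foreach(line -> {
     // inside a task, resolve the node-local copy of the distributed file
     String localPath = SparkFiles.get("lookup.txt");
     // ... open localPath with ordinary java.io / java.nio code ...
   });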

Parameters:
path - (undocumented)

addJar

public void addJar(String path)
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

Parameters:
path - (undocumented)

clearJars

public void clearJars()
Clear the job's list of JARs added by addJar so that they do not get downloaded to any new nodes.


clearFiles

public void clearFiles()
Clear the job's list of files added by addFile so that they do not get downloaded to any new nodes.


hadoopConfiguration

public org.apache.hadoop.conf.Configuration hadoopConfiguration()
Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.

Note: As it will be reused in all Hadoop RDDs, it's better not to modify it unless you plan to set some global configurations for all Hadoop RDDs.

Returns:
(undocumented)

setCheckpointDir

public void setCheckpointDir(String dir)
Set the directory under which RDDs are going to be checkpointed. The directory must be an HDFS path if running on a cluster.
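
For example (the directory and input path are placeholders):

   sc.setCheckpointDir("hdfs://namenode:8020/spark-checkpoints");
   JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log");
   lines.checkpoint();   // marks the RDD for checkpointing
   lines.count();        // the checkpoint is materialized when an action runs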

Parameters:
dir - (undocumented)

getCheckpointDir

public com.google.common.base.Optional<String> getCheckpointDir()

getConf

public SparkConf getConf()
Return a copy of this JavaSparkContext's configuration. The configuration cannot be changed at runtime.

Returns:
(undocumented)

setCallSite

public void setCallSite(String site)
Pass-through to SparkContext.setCallSite. For API support only.

Parameters:
site - (undocumented)

clearCallSite

public void clearCallSite()
Pass-through to SparkContext.clearCallSite. For API support only.


setLocalProperty

public void setLocalProperty(String key,
                             String value)
Set a local property that affects jobs submitted from this thread, such as the Spark fair scheduler pool.
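
For example, assigning the jobs submitted from this thread to a fair scheduler pool (the pool name is a placeholder and only takes effect when the fair scheduler is configured):

   sc.setLocalProperty("spark.scheduler.pool", "production");
   JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log");
   lines.count();   // this job runs in the "production" pool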

Parameters:
key - (undocumented)
value - (undocumented)

getLocalProperty

public String getLocalProperty(String key)
Get a local property set in this thread, or null if it is missing. See org.apache.spark.api.java.JavaSparkContext.setLocalProperty.

Parameters:
key - (undocumented)
Returns:
(undocumented)

setLogLevel

public void setLogLevel(String logLevel)
Control the log level. This overrides any user-defined log settings.

Parameters:
logLevel - The desired log level as a string. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN

setJobGroup

public void setJobGroup(String groupId,
                        String description,
                        boolean interruptOnCancel)
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

Often, a unit of execution in an application consists of multiple Spark actions or jobs. Application programmers can use this method to group all those jobs together and give a group description. Once set, the Spark web UI will associate such jobs with this group.

The application can also use org.apache.spark.api.java.JavaSparkContext.cancelJobGroup to cancel all running jobs in this group. For example,


 // In the main thread:
 sc.setJobGroup("some_job_to_cancel", "some job description");
 rdd.map(...).count();

 // In a separate thread:
 sc.cancelJobGroup("some_job_to_cancel");
 

If interruptOnCancel is set to true for the job group, then job cancellation will result in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.

Parameters:
groupId - (undocumented)
description - (undocumented)
interruptOnCancel - (undocumented)

setJobGroup

public void setJobGroup(String groupId,
                        String description)
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

Parameters:
groupId - (undocumented)
description - (undocumented)
See Also:
setJobGroup(groupId: String, description: String, interruptThread: Boolean). This method sets interruptOnCancel to false.

clearJobGroup

public void clearJobGroup()
Clear the current thread's job group ID and its description.


cancelJobGroup

public void cancelJobGroup(String groupId)
Cancel active jobs for the specified group. See org.apache.spark.api.java.JavaSparkContext.setJobGroup for more information.

Parameters:
groupId - (undocumented)

cancelAllJobs

public void cancelAllJobs()
Cancel all jobs that have been scheduled or are running.


union

public <T> JavaRDD<T> union(JavaRDD<T>... rdds)

union

public JavaDoubleRDD union(JavaDoubleRDD... rdds)

union

public <K,V> JavaPairRDD<K,V> union(JavaPairRDD<K,V>... rdds)