org.apache.spark.rdd
Class RDD<T>

Object
  extended by org.apache.spark.rdd.RDD<T>
All Implemented Interfaces:
java.io.Serializable, Logging
Direct Known Subclasses:
BaseRRDD, CoGroupedRDD, EdgeRDD, HadoopRDD, JdbcRDD, NewHadoopRDD, PartitionPruningRDD, ShuffledRDD, UnionRDD, VertexRDD

public abstract class RDD<T>
extends Object
implements scala.Serializable, Logging

A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel. This class contains the basic operations available on all RDDs, such as map, filter, and persist. In addition, PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; DoubleRDDFunctions contains operations available only on RDDs of Doubles; and SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. All operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.

Internally, each RDD is characterized by five main properties:

- A list of partitions
- A function for computing each split
- A list of dependencies on other RDDs
- Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
- Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)

All of the scheduling and execution in Spark is done based on these methods, allowing each RDD to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for reading data from a new storage system) by overriding these functions. Please refer to the Spark paper for more details on RDD internals.
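
For example, a minimal sketch of the transformation/action model (illustrative only; assumes an existing SparkContext named sc and a hypothetical HDFS path):

   val lines  = sc.textFile("hdfs://namenode/path/to/input.txt")  // RDD[String]
   val counts = lines.map(_.length)                               // transformation (lazy)
   val total  = counts.reduce(_ + _)                              // action (runs a job)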

See Also:
Serialized Form

Constructor Summary
RDD(RDD<?> oneParent, scala.reflect.ClassTag<T> evidence$2)
          Construct an RDD with just a one-to-one dependency on one parent
RDD(SparkContext _sc, scala.collection.Seq<Dependency<?>> deps, scala.reflect.ClassTag<T> evidence$1)
           
 
Method Summary
<U> U
aggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, scala.reflect.ClassTag<U> evidence$32)
          Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
 RDD<T> cache()
          Persist this RDD with the default storage level (`MEMORY_ONLY`).
<U> RDD<scala.Tuple2<T,U>>
cartesian(RDD<U> other, scala.reflect.ClassTag<U> evidence$5)
          Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
 void checkpoint()
          Mark this RDD for checkpointing.
 scala.Option<org.apache.spark.rdd.RDDCheckpointData<T>> checkpointData()
           
 RDD<T> coalesce(int numPartitions, boolean shuffle, scala.math.Ordering<T> ord)
          Return a new RDD that is reduced into numPartitions partitions.
 Object collect()
          Return an array that contains all of the elements in this RDD.
<U> RDD<U>
collect(scala.PartialFunction<T,U> f, scala.reflect.ClassTag<U> evidence$31)
          Return an RDD that contains all matching values by applying f.
abstract  scala.collection.Iterator<T> compute(Partition split, TaskContext context)
          :: DeveloperApi :: Implemented by subclasses to compute a given partition.
 SparkContext context()
          The SparkContext that this RDD was created on.
 long count()
          Return the number of elements in the RDD.
 PartialResult<BoundedDouble> countApprox(long timeout, double confidence)
          :: Experimental :: Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
 long countApproxDistinct(double relativeSD)
          Return approximate number of distinct elements in the RDD.
 long countApproxDistinct(int p, int sp)
          :: Experimental :: Return approximate number of distinct elements in the RDD.
 scala.collection.Map<T,Object> countByValue(scala.math.Ordering<T> ord)
          Return the count of each unique value in this RDD as a local map of (value, count) pairs.
 PartialResult<scala.collection.Map<T,BoundedDouble>> countByValueApprox(long timeout, double confidence, scala.math.Ordering<T> ord)
          :: Experimental :: Approximate version of countByValue().
 org.apache.spark.util.CallSite creationSite()
          User code that created this RDD (e.g. `textFile`, `parallelize`).
 scala.collection.Seq<Dependency<?>> dependencies()
          Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
 RDD<T> distinct()
          Return a new RDD containing the distinct elements in this RDD.
 RDD<T> distinct(int numPartitions, scala.math.Ordering<T> ord)
          Return a new RDD containing the distinct elements in this RDD.
static DoubleRDDFunctions doubleRDDToDoubleRDDFunctions(RDD<Object> rdd)
           
 RDD<T> filter(scala.Function1<T,Object> f)
          Return a new RDD containing only the elements that satisfy a predicate.
<A> RDD<T>
filterWith(scala.Function1<Object,A> constructA, scala.Function2<T,A,Object> p)
          Filters this RDD with p, where p takes an additional parameter of type A.
 T first()
          Return the first element in this RDD.
<U> RDD<U>
flatMap(scala.Function1<T,scala.collection.TraversableOnce<U>> f, scala.reflect.ClassTag<U> evidence$4)
          Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
<A,U> RDD<U>
flatMapWith(scala.Function1<Object,A> constructA, boolean preservesPartitioning, scala.Function2<T,A,scala.collection.Seq<U>> f, scala.reflect.ClassTag<U> evidence$11)
          FlatMaps f over this RDD, where f takes an additional parameter of type A.
 T fold(T zeroValue, scala.Function2<T,T,T> op)
          Aggregate the elements of each partition, and then the results for all the partitions, using a given associative and commutative function and a neutral "zero value".
 void foreach(scala.Function1<T,scala.runtime.BoxedUnit> f)
          Applies a function f to all elements of this RDD.
 void foreachPartition(scala.Function1<scala.collection.Iterator<T>,scala.runtime.BoxedUnit> f)
          Applies a function f to each partition of this RDD.
<A> void
foreachWith(scala.Function1<Object,A> constructA, scala.Function2<T,A,scala.runtime.BoxedUnit> f)
          Applies f to each element of this RDD, where f takes an additional parameter of type A.
 scala.Option<String> getCheckpointFile()
          Gets the name of the file to which this RDD was checkpointed
 StorageLevel getStorageLevel()
          Get the RDD's current storage level, or StorageLevel.NONE if none is set.
 RDD<Object> glom()
          Return an RDD created by coalescing all elements within each partition into an array.
<K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>>
groupBy(scala.Function1<T,K> f, scala.reflect.ClassTag<K> kt)
          Return an RDD of grouped items.
<K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>>
groupBy(scala.Function1<T,K> f, int numPartitions, scala.reflect.ClassTag<K> kt)
          Return an RDD of grouped elements.
<K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>>
groupBy(scala.Function1<T,K> f, Partitioner p, scala.reflect.ClassTag<K> kt, scala.math.Ordering<K> ord)
          Return an RDD of grouped items.
 int id()
          A unique ID for this RDD (within its SparkContext).
 RDD<T> intersection(RDD<T> other)
          Return the intersection of this RDD and another one.
 RDD<T> intersection(RDD<T> other, int numPartitions)
          Return the intersection of this RDD and another one.
 RDD<T> intersection(RDD<T> other, Partitioner partitioner, scala.math.Ordering<T> ord)
          Return the intersection of this RDD and another one.
 boolean isCheckpointed()
          Return whether this RDD has been checkpointed or not
 boolean isEmpty()
           
 scala.collection.Iterator<T> iterator(Partition split, TaskContext context)
          Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
<K> RDD<scala.Tuple2<K,T>>
keyBy(scala.Function1<T,K> f)
          Creates tuples of the elements in this RDD by applying f.
<U> RDD<U>
map(scala.Function1<T,U> f, scala.reflect.ClassTag<U> evidence$3)
          Return a new RDD by applying a function to all elements of this RDD.
<U> RDD<U>
mapPartitions(scala.Function1<scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$6)
          Return a new RDD by applying a function to each partition of this RDD.
<U> RDD<U>
mapPartitionsWithContext(scala.Function2<TaskContext,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$8)
          :: DeveloperApi :: Return a new RDD by applying a function to each partition of this RDD.
<U> RDD<U>
mapPartitionsWithIndex(scala.Function2<Object,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$7)
          Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
<U> RDD<U>
mapPartitionsWithSplit(scala.Function2<Object,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$9)
          Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
<A,U> RDD<U>
mapWith(scala.Function1<Object,A> constructA, boolean preservesPartitioning, scala.Function2<T,A,U> f, scala.reflect.ClassTag<U> evidence$10)
          Maps f over this RDD, where f takes an additional parameter of type A.
 T max(scala.math.Ordering<T> ord)
          Returns the max of this RDD as defined by the implicit Ordering[T].
 T min(scala.math.Ordering<T> ord)
          Returns the min of this RDD as defined by the implicit Ordering[T].
 String name()
          A friendly name for this RDD
static
<T> DoubleRDDFunctions
numericRDDToDoubleRDDFunctions(RDD<T> rdd, scala.math.Numeric<T> num)
           
 scala.Option<Partitioner> partitioner()
          Optionally overridden by subclasses to specify how they are partitioned.
 Partition[] partitions()
          Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
 RDD<T> persist()
          Persist this RDD with the default storage level (`MEMORY_ONLY`).
 RDD<T> persist(StorageLevel newLevel)
          Set this RDD's storage level to persist its values across operations after the first time it is computed.
 RDD<String> pipe(scala.collection.Seq<String> command, scala.collection.Map<String,String> env, scala.Function1<scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printPipeContext, scala.Function2<T,scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printRDDElement, boolean separateWorkingDir)
          Return an RDD created by piping elements to a forked external process.
 RDD<String> pipe(String command)
          Return an RDD created by piping elements to a forked external process.
 RDD<String> pipe(String command, scala.collection.Map<String,String> env)
          Return an RDD created by piping elements to a forked external process.
 scala.collection.Seq<String> preferredLocations(Partition split)
          Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
 RDD<T>[] randomSplit(double[] weights, long seed)
          Randomly splits this RDD with the provided weights.
static
<T> AsyncRDDActions<T>
rddToAsyncRDDActions(RDD<T> rdd, scala.reflect.ClassTag<T> evidence$36)
           
static
<K,V> OrderedRDDFunctions<K,V,scala.Tuple2<K,V>>
rddToOrderedRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.math.Ordering<K> evidence$37, scala.reflect.ClassTag<K> evidence$38, scala.reflect.ClassTag<V> evidence$39)
           
static
<K,V> PairRDDFunctions<K,V>
rddToPairRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, scala.math.Ordering<K> ord)
           
static
<K,V> SequenceFileRDDFunctions<K,V>
rddToSequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, org.apache.spark.WritableFactory<K> keyWritableFactory, org.apache.spark.WritableFactory<V> valueWritableFactory)
           
 T reduce(scala.Function2<T,T,T> f)
          Reduces the elements of this RDD using the specified commutative and associative binary operator.
 RDD<T> repartition(int numPartitions, scala.math.Ordering<T> ord)
          Return a new RDD that has exactly numPartitions partitions.
 RDD<T> sample(boolean withReplacement, double fraction, long seed)
          Return a sampled subset of this RDD.
 void saveAsObjectFile(String path)
          Save this RDD as a SequenceFile of serialized objects.
 void saveAsTextFile(String path)
          Save this RDD as a text file, using string representations of elements.
 void saveAsTextFile(String path, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
          Save this RDD as a compressed text file, using string representations of elements.
 scala.Option<org.apache.spark.rdd.RDDOperationScope> scope()
          The scope associated with the operation that created this RDD.
 RDD<T> setName(String _name)
          Assign a name to this RDD
<K> RDD<T>
sortBy(scala.Function1<T,K> f, boolean ascending, int numPartitions, scala.math.Ordering<K> ord, scala.reflect.ClassTag<K> ctag)
          Return this RDD sorted by the given key function.
 SparkContext sparkContext()
          The SparkContext that created this RDD.
 RDD<T> subtract(RDD<T> other)
          Return an RDD with the elements from this that are not in other.
 RDD<T> subtract(RDD<T> other, int numPartitions)
          Return an RDD with the elements from this that are not in other.
 RDD<T> subtract(RDD<T> other, Partitioner p, scala.math.Ordering<T> ord)
          Return an RDD with the elements from this that are not in other.
 Object take(int num)
          Take the first num elements of the RDD.
 Object takeOrdered(int num, scala.math.Ordering<T> ord)
          Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
 Object takeSample(boolean withReplacement, int num, long seed)
          Return a fixed-size sampled subset of this RDD in an array
 Object toArray()
          Return an array that contains all of the elements in this RDD.
 String toDebugString()
          A description of this RDD and its recursive dependencies for debugging.
 JavaRDD<T> toJavaRDD()
           
 scala.collection.Iterator<T> toLocalIterator()
          Return an iterator that contains all of the elements in this RDD.
 Object top(int num, scala.math.Ordering<T> ord)
           
 String toString()
           
<U> U
treeAggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, int depth, scala.reflect.ClassTag<U> evidence$33)
          Aggregates the elements of this RDD in a multi-level tree pattern.
 T treeReduce(scala.Function2<T,T,T> f, int depth)
          Reduces the elements of this RDD in a multi-level tree pattern.
 RDD<T> union(RDD<T> other)
          Return the union of this RDD and another one.
 RDD<T> unpersist(boolean blocking)
          Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
<U> RDD<scala.Tuple2<T,U>>
zip(RDD<U> other, scala.reflect.ClassTag<U> evidence$12)
          Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
<B,V> RDD<V>
zipPartitions(RDD<B> rdd2, boolean preservesPartitioning, scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$13, scala.reflect.ClassTag<V> evidence$14)
          Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
<B,V> RDD<V>
zipPartitions(RDD<B> rdd2, scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$15, scala.reflect.ClassTag<V> evidence$16)
           
<B,C,V> RDD<V>
zipPartitions(RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$17, scala.reflect.ClassTag<C> evidence$18, scala.reflect.ClassTag<V> evidence$19)
           
<B,C,V> RDD<V>
zipPartitions(RDD<B> rdd2, RDD<C> rdd3, scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$20, scala.reflect.ClassTag<C> evidence$21, scala.reflect.ClassTag<V> evidence$22)
           
<B,C,D,V> RDD<V>
zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, boolean preservesPartitioning, scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$23, scala.reflect.ClassTag<C> evidence$24, scala.reflect.ClassTag<D> evidence$25, scala.reflect.ClassTag<V> evidence$26)
           
<B,C,D,V> RDD<V>
zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$27, scala.reflect.ClassTag<C> evidence$28, scala.reflect.ClassTag<D> evidence$29, scala.reflect.ClassTag<V> evidence$30)
           
 RDD<scala.Tuple2<T,Object>> zipWithIndex()
          Zips this RDD with its element indices.
 RDD<scala.Tuple2<T,Object>> zipWithUniqueId()
          Zips this RDD with generated unique Long ids.
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
 
Methods inherited from interface org.apache.spark.Logging
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
 

Constructor Detail

RDD

public RDD(SparkContext _sc,
           scala.collection.Seq<Dependency<?>> deps,
           scala.reflect.ClassTag<T> evidence$1)

RDD

public RDD(RDD<?> oneParent,
           scala.reflect.ClassTag<T> evidence$2)
Construct an RDD with just a one-to-one dependency on one parent

Method Detail

rddToPairRDDFunctions

public static <K,V> PairRDDFunctions<K,V> rddToPairRDDFunctions(RDD<scala.Tuple2<K,V>> rdd,
                                                                scala.reflect.ClassTag<K> kt,
                                                                scala.reflect.ClassTag<V> vt,
                                                                scala.math.Ordering<K> ord)

rddToAsyncRDDActions

public static <T> AsyncRDDActions<T> rddToAsyncRDDActions(RDD<T> rdd,
                                                          scala.reflect.ClassTag<T> evidence$36)

rddToSequenceFileRDDFunctions

public static <K,V> SequenceFileRDDFunctions<K,V> rddToSequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> rdd,
                                                                                scala.reflect.ClassTag<K> kt,
                                                                                scala.reflect.ClassTag<V> vt,
                                                                                 org.apache.spark.WritableFactory<K> keyWritableFactory,
                                                                                 org.apache.spark.WritableFactory<V> valueWritableFactory)

rddToOrderedRDDFunctions

public static <K,V> OrderedRDDFunctions<K,V,scala.Tuple2<K,V>> rddToOrderedRDDFunctions(RDD<scala.Tuple2<K,V>> rdd,
                                                                                        scala.math.Ordering<K> evidence$37,
                                                                                        scala.reflect.ClassTag<K> evidence$38,
                                                                                        scala.reflect.ClassTag<V> evidence$39)

doubleRDDToDoubleRDDFunctions

public static DoubleRDDFunctions doubleRDDToDoubleRDDFunctions(RDD<Object> rdd)

numericRDDToDoubleRDDFunctions

public static <T> DoubleRDDFunctions numericRDDToDoubleRDDFunctions(RDD<T> rdd,
                                                                    scala.math.Numeric<T> num)

compute

public abstract scala.collection.Iterator<T> compute(Partition split,
                                                     TaskContext context)
:: DeveloperApi :: Implemented by subclasses to compute a given partition.

Parameters:
split - (undocumented)
context - (undocumented)
Returns:
(undocumented)

partitioner

public scala.Option<Partitioner> partitioner()
Optionally overridden by subclasses to specify how they are partitioned.


sparkContext

public SparkContext sparkContext()
The SparkContext that created this RDD.


id

public int id()
A unique ID for this RDD (within its SparkContext).


name

public String name()
A friendly name for this RDD


setName

public RDD<T> setName(String _name)
Assign a name to this RDD


persist

public RDD<T> persist(StorageLevel newLevel)
Set this RDD's storage level to persist its values across operations after the first time it is computed. This can only be used to assign a new storage level if the RDD does not have a storage level set yet.

Parameters:
newLevel - (undocumented)
Returns:
(undocumented)
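
For example, a minimal sketch (assuming an existing SparkContext sc):

   import org.apache.spark.storage.StorageLevel
   val rdd = sc.parallelize(1 to 1000000)
   rdd.persist(StorageLevel.MEMORY_AND_DISK)  // set a non-default level before the first action
   rdd.count()                                // first action computes and caches the RDD
   rdd.count()                                // later actions reuse the cached blocks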

persist

public RDD<T> persist()
Persist this RDD with the default storage level (`MEMORY_ONLY`).


cache

public RDD<T> cache()
Persist this RDD with the default storage level (`MEMORY_ONLY`).


unpersist

public RDD<T> unpersist(boolean blocking)
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.

Parameters:
blocking - Whether to block until all blocks are deleted.
Returns:
This RDD.

getStorageLevel

public StorageLevel getStorageLevel()
Get the RDD's current storage level, or StorageLevel.NONE if none is set.


dependencies

public final scala.collection.Seq<Dependency<?>> dependencies()
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.

Returns:
(undocumented)

partitions

public final Partition[] partitions()
Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.

Returns:
(undocumented)

preferredLocations

public final scala.collection.Seq<String> preferredLocations(Partition split)
Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.

Parameters:
split - (undocumented)
Returns:
(undocumented)

iterator

public final scala.collection.Iterator<T> iterator(Partition split,
                                                   TaskContext context)
Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should *not* be called by users directly, but is available for implementors of custom subclasses of RDD.

Parameters:
split - (undocumented)
context - (undocumented)
Returns:
(undocumented)

map

public <U> RDD<U> map(scala.Function1<T,U> f,
                      scala.reflect.ClassTag<U> evidence$3)
Return a new RDD by applying a function to all elements of this RDD.

Parameters:
f - (undocumented)
evidence$3 - (undocumented)
Returns:
(undocumented)

flatMap

public <U> RDD<U> flatMap(scala.Function1<T,scala.collection.TraversableOnce<U>> f,
                          scala.reflect.ClassTag<U> evidence$4)
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.

Parameters:
f - (undocumented)
evidence$4 - (undocumented)
Returns:
(undocumented)

filter

public RDD<T> filter(scala.Function1<T,Object> f)
Return a new RDD containing only the elements that satisfy a predicate.

Parameters:
f - (undocumented)
Returns:
(undocumented)

distinct

public RDD<T> distinct(int numPartitions,
                       scala.math.Ordering<T> ord)
Return a new RDD containing the distinct elements in this RDD.

Parameters:
numPartitions - (undocumented)
ord - (undocumented)
Returns:
(undocumented)

distinct

public RDD<T> distinct()
Return a new RDD containing the distinct elements in this RDD.

Returns:
(undocumented)

repartition

public RDD<T> repartition(int numPartitions,
                          scala.math.Ordering<T> ord)
Return a new RDD that has exactly numPartitions partitions.

Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data.

If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.

Parameters:
numPartitions - (undocumented)
ord - (undocumented)
Returns:
(undocumented)

coalesce

public RDD<T> coalesce(int numPartitions,
                       boolean shuffle,
                       scala.math.Ordering<T> ord)
Return a new RDD that is reduced into numPartitions partitions.

This results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle, instead each of the 100 new partitions will claim 10 of the current partitions.

However, if you're doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you like (e.g. one node in the case of numPartitions = 1). To avoid this, you can pass shuffle = true. This will add a shuffle step, but means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).

Note: With shuffle = true, you can actually coalesce to a larger number of partitions. This is useful if you have a small number of partitions, say 100, potentially with a few partitions being abnormally large. Calling coalesce(1000, shuffle = true) will result in 1000 partitions with the data distributed using a hash partitioner.

Parameters:
numPartitions - (undocumented)
shuffle - (undocumented)
ord - (undocumented)
Returns:
(undocumented)
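
For example, a minimal sketch (assuming an existing SparkContext sc):

   val rdd = sc.parallelize(1 to 1000000, 1000)
   rdd.coalesce(100).partitions.length                // 100; narrow dependency, no shuffle
   rdd.coalesce(1, shuffle = true).partitions.length  // 1; upstream partitions still run in parallel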

sample

public RDD<T> sample(boolean withReplacement,
                     double fraction,
                     long seed)
Return a sampled subset of this RDD.

Parameters:
withReplacement - can elements be sampled multiple times (replaced when sampled out)
fraction - expected size of the sample as a fraction of this RDD's size. Without replacement: probability that each element is chosen; fraction must be in [0, 1]. With replacement: expected number of times each element is chosen; fraction must be >= 0.
seed - seed for the random number generator
Returns:
(undocumented)
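
For example (illustrative only; the exact sample size is random, assuming an existing sc):

   val data = sc.parallelize(1 to 10000)
   data.sample(withReplacement = false, fraction = 0.1, seed = 7L).count()  // roughly 1000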

randomSplit

public RDD<T>[] randomSplit(double[] weights,
                            long seed)
Randomly splits this RDD with the provided weights.

Parameters:
weights - weights for splits, will be normalized if they don't sum to 1
seed - random seed

Returns:
split RDDs in an array
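
For example, a typical train/test split (assuming an existing sc):

   val data = sc.parallelize(1 to 1000)
   val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)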

takeSample

public Object takeSample(boolean withReplacement,
                         int num,
                         long seed)
Return a fixed-size sampled subset of this RDD in an array

Parameters:
withReplacement - whether sampling is done with replacement
num - size of the returned sample
seed - seed for the random number generator
Returns:
sample of specified size in an array

union

public RDD<T> union(RDD<T> other)
Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).

Parameters:
other - (undocumented)
Returns:
(undocumented)

sortBy

public <K> RDD<T> sortBy(scala.Function1<T,K> f,
                         boolean ascending,
                         int numPartitions,
                         scala.math.Ordering<K> ord,
                         scala.reflect.ClassTag<K> ctag)
Return this RDD sorted by the given key function.

Parameters:
f - (undocumented)
ascending - (undocumented)
numPartitions - (undocumented)
ord - (undocumented)
ctag - (undocumented)
Returns:
(undocumented)
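
For example, sorting by a derived key (assuming an existing sc):

   val words = sc.parallelize(Seq("partition", "rdd", "spark"))
   words.sortBy(_.length).collect()  // Array(rdd, spark, partition)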

intersection

public RDD<T> intersection(RDD<T> other)
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

Note that this method performs a shuffle internally.

Parameters:
other - (undocumented)
Returns:
(undocumented)

intersection

public RDD<T> intersection(RDD<T> other,
                           Partitioner partitioner,
                           scala.math.Ordering<T> ord)
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

Note that this method performs a shuffle internally.

Parameters:
partitioner - Partitioner to use for the resulting RDD
other - (undocumented)
ord - (undocumented)
Returns:
(undocumented)

intersection

public RDD<T> intersection(RDD<T> other,
                           int numPartitions)
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did. Performs a hash partition across the cluster

Note that this method performs a shuffle internally.

Parameters:
numPartitions - How many partitions to use in the resulting RDD
other - (undocumented)
Returns:
(undocumented)

glom

public RDD<Object> glom()
Return an RDD created by coalescing all elements within each partition into an array.

Returns:
(undocumented)

cartesian

public <U> RDD<scala.Tuple2<T,U>> cartesian(RDD<U> other,
                                            scala.reflect.ClassTag<U> evidence$5)
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.

Parameters:
other - (undocumented)
evidence$5 - (undocumented)
Returns:
(undocumented)

groupBy

public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T,K> f,
                                                                     scala.reflect.ClassTag<K> kt)
Return an RDD of grouped items. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.

Note: This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.

Parameters:
f - (undocumented)
kt - (undocumented)
Returns:
(undocumented)
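
For example (assuming an existing sc; the printed grouping is illustrative):

   val nums = sc.parallelize(1 to 10)
   nums.groupBy(_ % 2).collect()
   // e.g. Array((0, [2, 4, 6, 8, 10]), (1, [1, 3, 5, 7, 9])) -- ordering within groups is not guaranteed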

groupBy

public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T,K> f,
                                                                     int numPartitions,
                                                                     scala.reflect.ClassTag<K> kt)
Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.

Note: This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.

Parameters:
f - (undocumented)
numPartitions - (undocumented)
kt - (undocumented)
Returns:
(undocumented)

groupBy

public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T,K> f,
                                                                     Partitioner p,
                                                                     scala.reflect.ClassTag<K> kt,
                                                                     scala.math.Ordering<K> ord)
Return an RDD of grouped items. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.

Note: This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.

Parameters:
f - (undocumented)
p - (undocumented)
kt - (undocumented)
ord - (undocumented)
Returns:
(undocumented)

pipe

public RDD<String> pipe(String command)
Return an RDD created by piping elements to a forked external process.

Parameters:
command - (undocumented)
Returns:
(undocumented)

pipe

public RDD<String> pipe(String command,
                        scala.collection.Map<String,String> env)
Return an RDD created by piping elements to a forked external process.

Parameters:
command - (undocumented)
env - (undocumented)
Returns:
(undocumented)

pipe

public RDD<String> pipe(scala.collection.Seq<String> command,
                        scala.collection.Map<String,String> env,
                        scala.Function1<scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printPipeContext,
                        scala.Function2<T,scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printRDDElement,
                        boolean separateWorkingDir)
Return an RDD created by piping elements to a forked external process. The print behavior can be customized by providing two functions.

Parameters:
command - command to run in forked process.
env - environment variables to set.
printPipeContext - Before piping elements, this function is called as an opportunity to pipe context data. Print line function (like out.println) will be passed as printPipeContext's parameter.
printRDDElement - Use this function to customize how to pipe elements. This function will be called with each RDD element as the 1st parameter, and the print line function (like out.println()) as the 2nd parameter. An example of piping the RDD data of groupBy() in a streaming way, instead of constructing a huge String to concatenate all the elements: def printRDDElement(record: (String, Seq[String]), f: String => Unit) = for (e <- record._2) {f(e)}
separateWorkingDir - Use separate working directories for each task.
Returns:
the result RDD
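
For example, a minimal sketch (assuming an existing sc and that the tr utility is available on every worker):

   val rdd   = sc.parallelize(Seq("hello", "world"))
   val upper = rdd.pipe("tr a-z A-Z")  // each element is written to the process's stdin, one per line
   upper.collect()                     // Array(HELLO, WORLD)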

mapPartitions

public <U> RDD<U> mapPartitions(scala.Function1<scala.collection.Iterator<T>,scala.collection.Iterator<U>> f,
                                boolean preservesPartitioning,
                                scala.reflect.ClassTag<U> evidence$6)
Return a new RDD by applying a function to each partition of this RDD.

preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.

Parameters:
f - (undocumented)
preservesPartitioning - (undocumented)
evidence$6 - (undocumented)
Returns:
(undocumented)
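
For example, computing one partial sum per partition (assuming an existing sc); the same pattern is useful for amortizing expensive per-partition setup such as opening a connection:

   val rdd = sc.parallelize(1 to 100, 4)
   val partialSums = rdd.mapPartitions(iter => Iterator(iter.sum))
   partialSums.collect()  // one partial sum per partition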

mapPartitionsWithIndex

public <U> RDD<U> mapPartitionsWithIndex(scala.Function2<Object,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f,
                                         boolean preservesPartitioning,
                                         scala.reflect.ClassTag<U> evidence$7)
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.

Parameters:
f - (undocumented)
preservesPartitioning - (undocumented)
evidence$7 - (undocumented)
Returns:
(undocumented)
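
For example, tagging each element with its partition index (assuming an existing sc):

   sc.parallelize(1 to 8, 4).mapPartitionsWithIndex { (idx, iter) =>
     iter.map(x => s"partition $idx: $x")
   }.collect()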

mapPartitionsWithContext

public <U> RDD<U> mapPartitionsWithContext(scala.Function2<TaskContext,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f,
                                           boolean preservesPartitioning,
                                           scala.reflect.ClassTag<U> evidence$8)
:: DeveloperApi :: Return a new RDD by applying a function to each partition of this RDD. This is a variant of mapPartitions that also passes the TaskContext into the closure.

preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.

Parameters:
f - (undocumented)
preservesPartitioning - (undocumented)
evidence$8 - (undocumented)
Returns:
(undocumented)

mapPartitionsWithSplit

public <U> RDD<U> mapPartitionsWithSplit(scala.Function2<Object,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f,
                                         boolean preservesPartitioning,
                                         scala.reflect.ClassTag<U> evidence$9)
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

Parameters:
f - (undocumented)
preservesPartitioning - (undocumented)
evidence$9 - (undocumented)
Returns:
(undocumented)

mapWith

public <A,U> RDD<U> mapWith(scala.Function1<Object,A> constructA,
                            boolean preservesPartitioning,
                            scala.Function2<T,A,U> f,
                            scala.reflect.ClassTag<U> evidence$10)
Maps f over this RDD, where f takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

Parameters:
constructA - (undocumented)
preservesPartitioning - (undocumented)
f - (undocumented)
evidence$10 - (undocumented)
Returns:
(undocumented)

flatMapWith

public <A,U> RDD<U> flatMapWith(scala.Function1<Object,A> constructA,
                                boolean preservesPartitioning,
                                scala.Function2<T,A,scala.collection.Seq<U>> f,
                                scala.reflect.ClassTag<U> evidence$11)
FlatMaps f over this RDD, where f takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

Parameters:
constructA - (undocumented)
preservesPartitioning - (undocumented)
f - (undocumented)
evidence$11 - (undocumented)
Returns:
(undocumented)

foreachWith

public <A> void foreachWith(scala.Function1<Object,A> constructA,
                            scala.Function2<T,A,scala.runtime.BoxedUnit> f)
Applies f to each element of this RDD, where f takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

Parameters:
constructA - (undocumented)
f - (undocumented)

filterWith

public <A> RDD<T> filterWith(scala.Function1<Object,A> constructA,
                             scala.Function2<T,A,Object> p)
Filters this RDD with p, where p takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

Parameters:
constructA - (undocumented)
p - (undocumented)
Returns:
(undocumented)

zip

public <U> RDD<scala.Tuple2<T,U>> zip(RDD<U> other,
                                      scala.reflect.ClassTag<U> evidence$12)
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the *same number of partitions* and the *same number of elements in each partition* (e.g. one was made through a map on the other).

Parameters:
other - (undocumented)
evidence$12 - (undocumented)
Returns:
(undocumented)
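
For example (assuming an existing sc; the second RDD is derived by a map, so the partition structure matches):

   val a = sc.parallelize(Seq(1, 2, 3), 2)
   val b = a.map(_.toString)
   a.zip(b).collect()  // Array((1,1), (2,2), (3,3)) as (Int, String) pairs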

zipPartitions

public <B,V> RDD<V> zipPartitions(RDD<B> rdd2,
                                  boolean preservesPartitioning,
                                  scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f,
                                  scala.reflect.ClassTag<B> evidence$13,
                                  scala.reflect.ClassTag<V> evidence$14)
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions. Assumes that all the RDDs have the *same number of partitions*, but does *not* require them to have the same number of elements in each partition.

Parameters:
rdd2 - (undocumented)
preservesPartitioning - (undocumented)
f - (undocumented)
evidence$13 - (undocumented)
evidence$14 - (undocumented)
Returns:
(undocumented)
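
For example, combining two RDDs that have the same number of partitions but different element counts (assuming an existing sc):

   val xs = sc.parallelize(1 to 6, 3)
   val ys = sc.parallelize(Seq("a", "b", "c"), 3)
   xs.zipPartitions(ys) { (ix, iy) => Iterator(ix.size + iy.size) }.collect()  // combined element count per partition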

zipPartitions

public <B,V> RDD<V> zipPartitions(RDD<B> rdd2,
                                  scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f,
                                  scala.reflect.ClassTag<B> evidence$15,
                                  scala.reflect.ClassTag<V> evidence$16)

zipPartitions

public <B,C,V> RDD<V> zipPartitions(RDD<B> rdd2,
                                    RDD<C> rdd3,
                                    boolean preservesPartitioning,
                                    scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f,
                                    scala.reflect.ClassTag<B> evidence$17,
                                    scala.reflect.ClassTag<C> evidence$18,
                                    scala.reflect.ClassTag<V> evidence$19)

zipPartitions

public <B,C,V> RDD<V> zipPartitions(RDD<B> rdd2,
                                    RDD<C> rdd3,
                                    scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f,
                                    scala.reflect.ClassTag<B> evidence$20,
                                    scala.reflect.ClassTag<C> evidence$21,
                                    scala.reflect.ClassTag<V> evidence$22)

zipPartitions

public <B,C,D,V> RDD<V> zipPartitions(RDD<B> rdd2,
                                      RDD<C> rdd3,
                                      RDD<D> rdd4,
                                      boolean preservesPartitioning,
                                      scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f,
                                      scala.reflect.ClassTag<B> evidence$23,
                                      scala.reflect.ClassTag<C> evidence$24,
                                      scala.reflect.ClassTag<D> evidence$25,
                                      scala.reflect.ClassTag<V> evidence$26)

zipPartitions

public <B,C,D,V> RDD<V> zipPartitions(RDD<B> rdd2,
                                      RDD<C> rdd3,
                                      RDD<D> rdd4,
                                      scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f,
                                      scala.reflect.ClassTag<B> evidence$27,
                                      scala.reflect.ClassTag<C> evidence$28,
                                      scala.reflect.ClassTag<D> evidence$29,
                                      scala.reflect.ClassTag<V> evidence$30)

foreach

public void foreach(scala.Function1<T,scala.runtime.BoxedUnit> f)
Applies a function f to all elements of this RDD.

Parameters:
f - (undocumented)

foreachPartition

public void foreachPartition(scala.Function1<scala.collection.Iterator<T>,scala.runtime.BoxedUnit> f)
Applies a function f to each partition of this RDD.

Parameters:
f - (undocumented)

collect

public Object collect()
Return an array that contains all of the elements in this RDD.

Returns:
(undocumented)

toLocalIterator

public scala.collection.Iterator<T> toLocalIterator()
Return an iterator that contains all of the elements in this RDD.

The iterator will consume as much memory as the largest partition in this RDD.

Note: this results in multiple Spark jobs, and if the input RDD is the result of a wide transformation (e.g. join with different partitioners), the input RDD should be cached first to avoid recomputing it.

Returns:
(undocumented)

toArray

public Object toArray()
Return an array that contains all of the elements in this RDD.

Returns:
(undocumented)

collect

public <U> RDD<U> collect(scala.PartialFunction<T,U> f,
                          scala.reflect.ClassTag<U> evidence$31)
Return an RDD that contains all matching values by applying f.

Parameters:
f - (undocumented)
evidence$31 - (undocumented)
Returns:
(undocumented)

subtract

public RDD<T> subtract(RDD<T> other)
Return an RDD with the elements from this that are not in other.

Uses this partitioner/partition size, because even if other is huge, the resulting RDD will be <= us.

Parameters:
other - (undocumented)
Returns:
(undocumented)

subtract

public RDD<T> subtract(RDD<T> other,
                       int numPartitions)
Return an RDD with the elements from this that are not in other.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

subtract

public RDD<T> subtract(RDD<T> other,
                       Partitioner p,
                       scala.math.Ordering<T> ord)
Return an RDD with the elements from this that are not in other.

Parameters:
other - (undocumented)
p - (undocumented)
ord - (undocumented)
Returns:
(undocumented)

reduce

public T reduce(scala.Function2<T,T,T> f)
Reduces the elements of this RDD using the specified commutative and associative binary operator.

Parameters:
f - (undocumented)
Returns:
(undocumented)

treeReduce

public T treeReduce(scala.Function2<T,T,T> f,
                    int depth)
Reduces the elements of this RDD in a multi-level tree pattern.

Parameters:
depth - suggested depth of the tree (default: 2)
f - (undocumented)
Returns:
(undocumented)
See Also:
reduce(scala.Function2)

fold

public T fold(T zeroValue,
              scala.Function2<T,T,T> op)
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative and commutative function and a neutral "zero value". The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.

This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.

Parameters:
zeroValue - (undocumented)
op - (undocumented)
Returns:
(undocumented)
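
For example (assuming an existing sc); note that the zero value must be the identity for op, since it is applied once per partition and again when merging the partial results:

   sc.parallelize(Seq(1, 2, 3, 4), 2).fold(0)(_ + _)  // 10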

aggregate

public <U> U aggregate(U zeroValue,
                       scala.Function2<U,T,U> seqOp,
                       scala.Function2<U,U,U> combOp,
                       scala.reflect.ClassTag<U> evidence$32)
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into a U and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.

Parameters:
zeroValue - (undocumented)
seqOp - (undocumented)
combOp - (undocumented)
evidence$32 - (undocumented)
Returns:
(undocumented)
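
For example, computing a (sum, count) pair in one pass, where the accumulator type differs from the element type (assuming an existing sc):

   val nums = sc.parallelize(1 to 100)
   val (sum, count) = nums.aggregate((0, 0))(
     (acc, x) => (acc._1 + x, acc._2 + 1),    // seqOp: fold an element into an accumulator
     (a, b)   => (a._1 + b._1, a._2 + b._2))  // combOp: merge two accumulators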

treeAggregate

public <U> U treeAggregate(U zeroValue,
                           scala.Function2<U,T,U> seqOp,
                           scala.Function2<U,U,U> combOp,
                           int depth,
                           scala.reflect.ClassTag<U> evidence$33)
Aggregates the elements of this RDD in a multi-level tree pattern.

Parameters:
depth - suggested depth of the tree (default: 2)
zeroValue - (undocumented)
seqOp - (undocumented)
combOp - (undocumented)
evidence$33 - (undocumented)
Returns:
(undocumented)
See Also:
aggregate(U, scala.Function2, scala.Function2, scala.reflect.ClassTag)

count

public long count()
Return the number of elements in the RDD.

Returns:
(undocumented)

countApprox

public PartialResult<BoundedDouble> countApprox(long timeout,
                                                double confidence)
:: Experimental :: Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.

Parameters:
timeout - (undocumented)
confidence - (undocumented)
Returns:
(undocumented)

countByValue

public scala.collection.Map<T,Object> countByValue(scala.math.Ordering<T> ord)
Return the count of each unique value in this RDD as a local map of (value, count) pairs.

Note that this method should only be used if the resulting map is expected to be small, as the whole thing is loaded into the driver's memory. To handle very large results, consider using rdd.map(x => (x, 1L)).reduceByKey(_ + _), which returns an RDD[(T, Long)] instead of a map.

Parameters:
ord - (undocumented)
Returns:
(undocumented)

countByValueApprox

public PartialResult<scala.collection.Map<T,BoundedDouble>> countByValueApprox(long timeout,
                                                                               double confidence,
                                                                               scala.math.Ordering<T> ord)
:: Experimental :: Approximate version of countByValue().

Parameters:
timeout - (undocumented)
confidence - (undocumented)
ord - (undocumented)
Returns:
(undocumented)

countApproxDistinct

public long countApproxDistinct(int p,
                                int sp)
:: Experimental :: Return approximate number of distinct elements in the RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

The relative accuracy is approximately 1.054 / sqrt(2^p). Setting a nonzero sp > p would trigger sparse representation of registers, which may reduce the memory consumption and increase accuracy when the cardinality is small.

Parameters:
p - The precision value for the normal set. p must be a value between 4 and sp if sp is not zero (32 max).
sp - The precision value for the sparse set, between 0 and 32. If sp equals 0, the sparse representation is skipped.
Returns:
(undocumented)

countApproxDistinct

public long countApproxDistinct(double relativeSD)
Return approximate number of distinct elements in the RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

Parameters:
relativeSD - Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
Returns:
(undocumented)

zipWithIndex

public RDD<scala.Tuple2<T,Object>> zipWithIndex()
Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition receives the largest index.

This is similar to Scala's zipWithIndex but it uses Long instead of Int as the index type. This method needs to trigger a Spark job when this RDD contains more than one partition.

Note that some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The index assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.

Returns:
(undocumented)
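
For example (assuming an existing sc):

   sc.parallelize(Seq("a", "b", "c"), 2).zipWithIndex().collect()
   // Array((a,0), (b,1), (c,2))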

zipWithUniqueId

public RDD<scala.Tuple2<T,Object>> zipWithUniqueId()
Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method won't trigger a Spark job, which is different from zipWithIndex().

Note that some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The unique ID assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.

Returns:
(undocumented)

take

public Object take(int num)
Take the first num elements of the RDD. It works by first scanning one partition, and using the results from that partition to estimate the number of additional partitions needed to satisfy the limit.

Parameters:
num - (undocumented)
Returns:
(undocumented)

first

public T first()
Return the first element in this RDD.

Returns:
(undocumented)

top

public Object top(int num,
                  scala.math.Ordering<T> ord)

takeOrdered

public Object takeOrdered(int num,
                          scala.math.Ordering<T> ord)
Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering. This does the opposite of top. For example:

   sc.parallelize(Seq(10, 4, 2, 12, 3)).takeOrdered(1)
   // returns Array(2)

   sc.parallelize(Seq(2, 3, 4, 5, 6)).takeOrdered(2)
   // returns Array(2, 3)
 

Parameters:
num - k, the number of elements to return
ord - the implicit ordering for T
Returns:
an array of top elements

max

public T max(scala.math.Ordering<T> ord)
Returns the max of this RDD as defined by the implicit Ordering[T].

Parameters:
ord - (undocumented)
Returns:
the maximum element of the RDD

min

public T min(scala.math.Ordering<T> ord)
Returns the min of this RDD as defined by the implicit Ordering[T].

Parameters:
ord - (undocumented)
Returns:
the minimum element of the RDD

isEmpty

public boolean isEmpty()
Returns:
true if and only if the RDD contains no elements at all. Note that an RDD may be empty even when it has at least 1 partition.

saveAsTextFile

public void saveAsTextFile(String path)
Save this RDD as a text file, using string representations of elements.

Parameters:
path - (undocumented)

saveAsTextFile

public void saveAsTextFile(String path,
                           Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
Save this RDD as a compressed text file, using string representations of elements.

Parameters:
path - (undocumented)
codec - (undocumented)

saveAsObjectFile

public void saveAsObjectFile(String path)
Save this RDD as a SequenceFile of serialized objects.

Parameters:
path - (undocumented)

keyBy

public <K> RDD<scala.Tuple2<K,T>> keyBy(scala.Function1<T,K> f)
Creates tuples of the elements in this RDD by applying f.

Parameters:
f - (undocumented)
Returns:
(undocumented)
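
For example (assuming an existing sc):

   sc.parallelize(Seq("apple", "fig", "banana")).keyBy(_.length).collect()
   // Array((5,apple), (3,fig), (6,banana))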

checkpoint

public void checkpoint()
Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext.setCheckpointDir() and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
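
For example, a minimal sketch (assuming an existing sc and a hypothetical checkpoint directory):

   sc.setCheckpointDir("/tmp/spark-checkpoints")  // hypothetical directory visible to all workers
   val rdd = sc.parallelize(1 to 1000).map(_ * 2)
   rdd.cache()        // recommended: avoids recomputing the lineage when the checkpoint is written
   rdd.checkpoint()   // must be called before any job is run on this RDD
   rdd.count()        // this job computes the RDD and writes the checkpoint files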


isCheckpointed

public boolean isCheckpointed()
Return whether this RDD has been checkpointed or not

Returns:
(undocumented)

getCheckpointFile

public scala.Option<String> getCheckpointFile()
Gets the name of the file to which this RDD was checkpointed

Returns:
(undocumented)

creationSite

public org.apache.spark.util.CallSite creationSite()
User code that created this RDD (e.g. `textFile`, `parallelize`).


scope

public scala.Option<org.apache.spark.rdd.RDDOperationScope> scope()
The scope associated with the operation that created this RDD.

This is more flexible than the call site and can be defined hierarchically. For more detail, see the documentation of RDDOperationScope. This scope is not defined if the user instantiates this RDD himself without using any Spark operations.

Returns:
(undocumented)

checkpointData

public scala.Option<org.apache.spark.rdd.RDDCheckpointData<T>> checkpointData()

context

public SparkContext context()
The SparkContext that this RDD was created on.


toDebugString

public String toDebugString()
A description of this RDD and its recursive dependencies for debugging.


toString

public String toString()
Overrides:
toString in class Object

toJavaRDD

public JavaRDD<T> toJavaRDD()