Class RDD<T>

Object
org.apache.spark.rdd.RDD<T>
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, scala.Serializable
Direct Known Subclasses:
BaseRRDD, CoGroupedRDD, EdgeRDD, HadoopRDD, JdbcRDD, NewHadoopRDD, PartitionPruningRDD, ShuffledRDD, UnionRDD, VertexRDD

public abstract class RDD<T> extends Object implements scala.Serializable, org.apache.spark.internal.Logging
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel. This class contains the basic operations available on all RDDs, such as map, filter, and persist. In addition, PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; DoubleRDDFunctions contains operations available only on RDDs of Doubles; and SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. All operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit.

Internally, each RDD is characterized by five main properties:

- A list of partitions
- A function for computing each split
- A list of dependencies on other RDDs
- Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
- Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)

All of the scheduling and execution in Spark is done based on these methods, allowing each RDD to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for reading data from a new storage system) by overriding these functions. Please refer to the Spark paper for more details on RDD internals.
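
As a sketch of what such a custom RDD can look like, the two key overrides are getPartitions and compute. RangeRDD and RangePartition below are hypothetical illustrations over an in-memory integer range, not part of this API:

   import org.apache.spark.{Partition, SparkContext, TaskContext}
   import org.apache.spark.rdd.RDD

   // One partition per contiguous slice of the range.
   class RangePartition(override val index: Int, val start: Int, val end: Int) extends Partition

   class RangeRDD(sc: SparkContext, n: Int, slices: Int) extends RDD[Int](sc, Nil) {
     // The list of partitions.
     override protected def getPartitions: Array[Partition] =
       (0 until slices)
         .map(i => new RangePartition(i, i * n / slices, (i + 1) * n / slices))
         .toArray

     // The function for computing each split.
     override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
       val p = split.asInstanceOf[RangePartition]
       (p.start until p.end).iterator
     }
   }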

  • Nested Class Summary

    Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging

    org.apache.spark.internal.Logging.SparkShellLoggingFilter
  • Constructor Summary

    Constructors
    Constructor
    Description
    RDD(RDD<?> oneParent, scala.reflect.ClassTag<T> evidence$2)
    Construct an RDD with just a one-to-one dependency on one parent
    RDD(SparkContext _sc, scala.collection.Seq<Dependency<?>> deps, scala.reflect.ClassTag<T> evidence$1)
     
  • Method Summary

    Modifier and Type
    Method
    Description
    <U> U
    aggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, scala.reflect.ClassTag<U> evidence$33)
    Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
RDDBarrier<T>
barrier()
:: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together.
RDD<T>
cache()
Persist this RDD with the default storage level (MEMORY_ONLY).
    <U> RDD<scala.Tuple2<T,U>>
    cartesian(RDD<U> other, scala.reflect.ClassTag<U> evidence$5)
    Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
void
checkpoint()
Mark this RDD for checkpointing.
    void
    cleanShuffleDependencies(boolean blocking)
Removes an RDD's shuffles and its non-persisted ancestors.
RDD<T>
coalesce(int numPartitions, boolean shuffle, scala.Option<PartitionCoalescer> partitionCoalescer, scala.math.Ordering<T> ord)
    Return a new RDD that is reduced into numPartitions partitions.
Object
collect()
Return an array that contains all of the elements in this RDD.
    <U> RDD<U>
    collect(scala.PartialFunction<T,U> f, scala.reflect.ClassTag<U> evidence$32)
    Return an RDD that contains all matching values by applying f.
    abstract scala.collection.Iterator<T>
    compute(Partition split, TaskContext context)
    :: DeveloperApi :: Implemented by subclasses to compute a given partition.
SparkContext
context()
The SparkContext that this RDD was created on.
long
count()
Return the number of elements in the RDD.
PartialResult<BoundedDouble>
countApprox(long timeout, double confidence)
    Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
    long
    countApproxDistinct(double relativeSD)
    Return approximate number of distinct elements in the RDD.
    long
    countApproxDistinct(int p, int sp)
    Return approximate number of distinct elements in the RDD.
    scala.collection.Map<T,Object>
    countByValue(scala.math.Ordering<T> ord)
    Return the count of each unique value in this RDD as a local map of (value, count) pairs.
    PartialResult<scala.collection.Map<T,BoundedDouble>>
    countByValueApprox(long timeout, double confidence, scala.math.Ordering<T> ord)
    Approximate version of countByValue().
final scala.collection.Seq<Dependency<?>>
dependencies()
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
RDD<T>
distinct()
Return a new RDD containing the distinct elements in this RDD.
RDD<T>
distinct(int numPartitions, scala.math.Ordering<T> ord)
Return a new RDD containing the distinct elements in this RDD.
static DoubleRDDFunctions
doubleRDDToDoubleRDDFunctions(RDD<Object> rdd)
 
RDD<T>
filter(scala.Function1<T,Object> f)
Return a new RDD containing only the elements that satisfy a predicate.
T
first()
Return the first element in this RDD.
    <U> RDD<U>
    flatMap(scala.Function1<T,scala.collection.TraversableOnce<U>> f, scala.reflect.ClassTag<U> evidence$4)
    Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
T
fold(T zeroValue, scala.Function2<T,T,T> op)
    Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value".
    void
    foreach(scala.Function1<T,scala.runtime.BoxedUnit> f)
    Applies a function f to all elements of this RDD.
    void
    foreachPartition(scala.Function1<scala.collection.Iterator<T>,scala.runtime.BoxedUnit> f)
    Applies a function f to each partition of this RDD.
scala.Option<String>
getCheckpointFile()
Gets the name of the directory to which this RDD was checkpointed.
final int
getNumPartitions()
Returns the number of partitions of this RDD.
ResourceProfile
getResourceProfile()
Get the ResourceProfile specified with this RDD or null if it wasn't specified.
StorageLevel
getStorageLevel()
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
RDD<Object>
glom()
Return an RDD created by coalescing all elements within each partition into an array.
    <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>>
    groupBy(scala.Function1<T,K> f, int numPartitions, scala.reflect.ClassTag<K> kt)
    Return an RDD of grouped elements.
    <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>>
    groupBy(scala.Function1<T,K> f, Partitioner p, scala.reflect.ClassTag<K> kt, scala.math.Ordering<K> ord)
    Return an RDD of grouped items.
    <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>>
    groupBy(scala.Function1<T,K> f, scala.reflect.ClassTag<K> kt)
    Return an RDD of grouped items.
    int
    id()
    A unique ID for this RDD (within its SparkContext).
RDD<T>
intersection(RDD<T> other)
Return the intersection of this RDD and another one.
RDD<T>
intersection(RDD<T> other, int numPartitions)
Return the intersection of this RDD and another one.
RDD<T>
intersection(RDD<T> other, Partitioner partitioner, scala.math.Ordering<T> ord)
Return the intersection of this RDD and another one.
boolean
isCheckpointed()
Return whether this RDD is checkpointed and materialized, either reliably or locally.
boolean
isEmpty()
 
    final scala.collection.Iterator<T>
    iterator(Partition split, TaskContext context)
    Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
    <K> RDD<scala.Tuple2<K,T>>
    keyBy(scala.Function1<T,K> f)
    Creates tuples of the elements in this RDD by applying f.
RDD<T>
localCheckpoint()
Mark this RDD for local checkpointing using Spark's existing caching layer.
    <U> RDD<U>
    map(scala.Function1<T,U> f, scala.reflect.ClassTag<U> evidence$3)
    Return a new RDD by applying a function to all elements of this RDD.
    <U> RDD<U>
    mapPartitions(scala.Function1<scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$6)
    Return a new RDD by applying a function to each partition of this RDD.
    <U> RDD<U>
    mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T,U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$10)
    Return a new RDD by applying an evaluator to each partition of this RDD.
    <U> RDD<U>
    mapPartitionsWithIndex(scala.Function2<Object,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$9)
    Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
T
max(scala.math.Ordering<T> ord)
Returns the max of this RDD as defined by the implicit Ordering[T].
T
min(scala.math.Ordering<T> ord)
Returns the min of this RDD as defined by the implicit Ordering[T].
String
name()
A friendly name for this RDD
    static <T> DoubleRDDFunctions
    numericRDDToDoubleRDDFunctions(RDD<T> rdd, scala.math.Numeric<T> num)
     
scala.Option<Partitioner>
partitioner()
Optionally overridden by subclasses to specify how they are partitioned.
final Partition[]
partitions()
Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
RDD<T>
persist()
Persist this RDD with the default storage level (MEMORY_ONLY).
RDD<T>
persist(StorageLevel newLevel)
Set this RDD's storage level to persist its values across operations after the first time it is computed.
RDD<String>
pipe(String command)
Return an RDD created by piping elements to a forked external process.
RDD<String>
pipe(String command, scala.collection.Map<String,String> env)
Return an RDD created by piping elements to a forked external process.
RDD<String>
pipe(scala.collection.Seq<String> command, scala.collection.Map<String,String> env, scala.Function1<scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printPipeContext, scala.Function2<T,scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printRDDElement, boolean separateWorkingDir, int bufferSize, String encoding)
Return an RDD created by piping elements to a forked external process.
final scala.collection.Seq<String>
preferredLocations(Partition split)
Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
    RDD<T>[]
    randomSplit(double[] weights, long seed)
    Randomly splits this RDD with the provided weights.
    static <T> AsyncRDDActions<T>
    rddToAsyncRDDActions(RDD<T> rdd, scala.reflect.ClassTag<T> evidence$38)
     
    static <K, V> OrderedRDDFunctions<K,V,scala.Tuple2<K,V>>
    rddToOrderedRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.math.Ordering<K> evidence$39, scala.reflect.ClassTag<K> evidence$40, scala.reflect.ClassTag<V> evidence$41)
     
    static <K, V> PairRDDFunctions<K,V>
    rddToPairRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, scala.math.Ordering<K> ord)
     
    static <K, V> SequenceFileRDDFunctions<K,V>
    rddToSequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, <any> keyWritableFactory, <any> valueWritableFactory)
     
T
reduce(scala.Function2<T,T,T> f)
Reduces the elements of this RDD using the specified commutative and associative binary operator.
RDD<T>
repartition(int numPartitions, scala.math.Ordering<T> ord)
Return a new RDD that has exactly numPartitions partitions.
RDD<T>
sample(boolean withReplacement, double fraction, long seed)
Return a sampled subset of this RDD.
void
saveAsObjectFile(String path)
Save this RDD as a SequenceFile of serialized objects.
void
saveAsTextFile(String path)
Save this RDD as a text file, using string representations of elements.
void
saveAsTextFile(String path, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
Save this RDD as a compressed text file, using string representations of elements.
RDD<T>
setName(String _name)
Assign a name to this RDD
    <K> RDD<T>
    sortBy(scala.Function1<T,K> f, boolean ascending, int numPartitions, scala.math.Ordering<K> ord, scala.reflect.ClassTag<K> ctag)
    Return this RDD sorted by the given key function.
SparkContext
sparkContext()
The SparkContext that created this RDD.
RDD<T>
subtract(RDD<T> other)
Return an RDD with the elements from this that are not in other.
RDD<T>
subtract(RDD<T> other, int numPartitions)
Return an RDD with the elements from this that are not in other.
RDD<T>
subtract(RDD<T> other, Partitioner p, scala.math.Ordering<T> ord)
Return an RDD with the elements from this that are not in other.
Object
take(int num)
Take the first num elements of the RDD.
Object
takeOrdered(int num, scala.math.Ordering<T> ord)
Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
Object
takeSample(boolean withReplacement, int num, long seed)
Return a fixed-size sampled subset of this RDD in an array
String
toDebugString()
A description of this RDD and its recursive dependencies for debugging.
JavaRDD<T>
toJavaRDD()
 
scala.collection.Iterator<T>
toLocalIterator()
Return an iterator that contains all of the elements in this RDD.
Object
top(int num, scala.math.Ordering<T> ord)
Returns the top k (largest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
String
toString()
 
    <U> U
    treeAggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, int depth, boolean finalAggregateOnExecutor, scala.reflect.ClassTag<U> evidence$35)
    <U> U
    treeAggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, int depth, scala.reflect.ClassTag<U> evidence$34)
    Aggregates the elements of this RDD in a multi-level tree pattern.
T
treeReduce(scala.Function2<T,T,T> f, int depth)
Reduces the elements of this RDD in a multi-level tree pattern.
RDD<T>
union(RDD<T> other)
Return the union of this RDD and another one.
RDD<T>
unpersist(boolean blocking)
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
RDD<T>
withResources(ResourceProfile rp)
Specify a ResourceProfile to use when calculating this RDD.
    <U> RDD<scala.Tuple2<T,U>>
    zip(RDD<U> other, scala.reflect.ClassTag<U> evidence$13)
    Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
    <B, V> RDD<V>
    zipPartitions(RDD<B> rdd2, boolean preservesPartitioning, scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$14, scala.reflect.ClassTag<V> evidence$15)
    Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
    <B, C, V> RDD<V>
    zipPartitions(RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$18, scala.reflect.ClassTag<C> evidence$19, scala.reflect.ClassTag<V> evidence$20)
     
    <B, C, D, V> RDD<V>
    zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, boolean preservesPartitioning, scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$24, scala.reflect.ClassTag<C> evidence$25, scala.reflect.ClassTag<D> evidence$26, scala.reflect.ClassTag<V> evidence$27)
     
    <B, C, D, V> RDD<V>
    zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$28, scala.reflect.ClassTag<C> evidence$29, scala.reflect.ClassTag<D> evidence$30, scala.reflect.ClassTag<V> evidence$31)
     
    <B, C, V> RDD<V>
    zipPartitions(RDD<B> rdd2, RDD<C> rdd3, scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$21, scala.reflect.ClassTag<C> evidence$22, scala.reflect.ClassTag<V> evidence$23)
     
    <B, V> RDD<V>
    zipPartitions(RDD<B> rdd2, scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$16, scala.reflect.ClassTag<V> evidence$17)
     
    <U> RDD<U>
    zipPartitionsWithEvaluator(RDD<T> rdd2, PartitionEvaluatorFactory<T,U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$11)
    Zip this RDD's partitions with another RDD and return a new RDD by applying an evaluator to the zipped partitions.
RDD<scala.Tuple2<T,Object>>
zipWithIndex()
Zips this RDD with its element indices.
RDD<scala.Tuple2<T,Object>>
zipWithUniqueId()
Zips this RDD with generated unique Long ids.

    Methods inherited from class java.lang.Object

    equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

    Methods inherited from interface org.apache.spark.internal.Logging

    initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq
  • Constructor Details

    • RDD

      public RDD(SparkContext _sc, scala.collection.Seq<Dependency<?>> deps, scala.reflect.ClassTag<T> evidence$1)
    • RDD

      public RDD(RDD<?> oneParent, scala.reflect.ClassTag<T> evidence$2)
      Construct an RDD with just a one-to-one dependency on one parent
  • Method Details

    • rddToPairRDDFunctions

      public static <K, V> PairRDDFunctions<K,V> rddToPairRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, scala.math.Ordering<K> ord)
    • rddToAsyncRDDActions

      public static <T> AsyncRDDActions<T> rddToAsyncRDDActions(RDD<T> rdd, scala.reflect.ClassTag<T> evidence$38)
    • rddToSequenceFileRDDFunctions

      public static <K, V> SequenceFileRDDFunctions<K,V> rddToSequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, <any> keyWritableFactory, <any> valueWritableFactory)
    • rddToOrderedRDDFunctions

      public static <K, V> OrderedRDDFunctions<K,V,scala.Tuple2<K,V>> rddToOrderedRDDFunctions(RDD<scala.Tuple2<K,V>> rdd, scala.math.Ordering<K> evidence$39, scala.reflect.ClassTag<K> evidence$40, scala.reflect.ClassTag<V> evidence$41)
    • doubleRDDToDoubleRDDFunctions

      public static DoubleRDDFunctions doubleRDDToDoubleRDDFunctions(RDD<Object> rdd)
    • numericRDDToDoubleRDDFunctions

      public static <T> DoubleRDDFunctions numericRDDToDoubleRDDFunctions(RDD<T> rdd, scala.math.Numeric<T> num)
    • compute

      public abstract scala.collection.Iterator<T> compute(Partition split, TaskContext context)
      :: DeveloperApi :: Implemented by subclasses to compute a given partition.
      Parameters:
      split - (undocumented)
      context - (undocumented)
      Returns:
      (undocumented)
    • partitioner

      public scala.Option<Partitioner> partitioner()
      Optionally overridden by subclasses to specify how they are partitioned.
    • sparkContext

      public SparkContext sparkContext()
      The SparkContext that created this RDD.
    • id

      public int id()
      A unique ID for this RDD (within its SparkContext).
    • name

      public String name()
      A friendly name for this RDD
    • setName

      public RDD<T> setName(String _name)
      Assign a name to this RDD
    • persist

      public RDD<T> persist(StorageLevel newLevel)
      Set this RDD's storage level to persist its values across operations after the first time it is computed. This can only be used to assign a new storage level if the RDD does not have a storage level set yet. Local checkpointing is an exception.
      Parameters:
      newLevel - (undocumented)
      Returns:
      (undocumented)
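A typical call, for illustration (rdd is an assumed existing RDD):

   import org.apache.spark.storage.StorageLevel
   rdd.persist(StorageLevel.MEMORY_AND_DISK)   // spill to disk rather than recompute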
    • persist

      public RDD<T> persist()
      Persist this RDD with the default storage level (MEMORY_ONLY).
      Returns:
      (undocumented)
    • cache

      public RDD<T> cache()
      Persist this RDD with the default storage level (MEMORY_ONLY).
      Returns:
      (undocumented)
    • unpersist

      public RDD<T> unpersist(boolean blocking)
      Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.

      Parameters:
      blocking - Whether to block until all blocks are deleted (default: false)
      Returns:
      This RDD.
    • getStorageLevel

      public StorageLevel getStorageLevel()
      Get the RDD's current storage level, or StorageLevel.NONE if none is set.
    • dependencies

      public final scala.collection.Seq<Dependency<?>> dependencies()
      Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
      Returns:
      (undocumented)
    • partitions

      public final Partition[] partitions()
      Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
      Returns:
      (undocumented)
    • getNumPartitions

      public final int getNumPartitions()
      Returns the number of partitions of this RDD.
      Returns:
      (undocumented)
    • preferredLocations

      public final scala.collection.Seq<String> preferredLocations(Partition split)
      Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
      Parameters:
      split - (undocumented)
      Returns:
      (undocumented)
    • iterator

      public final scala.collection.Iterator<T> iterator(Partition split, TaskContext context)
Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should not be called by users directly, but is available for implementers of custom subclasses of RDD.
      Parameters:
      split - (undocumented)
      context - (undocumented)
      Returns:
      (undocumented)
    • map

      public <U> RDD<U> map(scala.Function1<T,U> f, scala.reflect.ClassTag<U> evidence$3)
      Return a new RDD by applying a function to all elements of this RDD.
      Parameters:
      f - (undocumented)
      evidence$3 - (undocumented)
      Returns:
      (undocumented)
    • flatMap

      public <U> RDD<U> flatMap(scala.Function1<T,scala.collection.TraversableOnce<U>> f, scala.reflect.ClassTag<U> evidence$4)
      Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
      Parameters:
      f - (undocumented)
      evidence$4 - (undocumented)
      Returns:
      (undocumented)
    • filter

      public RDD<T> filter(scala.Function1<T,Object> f)
      Return a new RDD containing only the elements that satisfy a predicate.
      Parameters:
      f - (undocumented)
      Returns:
      (undocumented)
    • distinct

      public RDD<T> distinct(int numPartitions, scala.math.Ordering<T> ord)
      Return a new RDD containing the distinct elements in this RDD.
      Parameters:
      numPartitions - (undocumented)
      ord - (undocumented)
      Returns:
      (undocumented)
    • distinct

      public RDD<T> distinct()
      Return a new RDD containing the distinct elements in this RDD.
      Returns:
      (undocumented)
    • repartition

      public RDD<T> repartition(int numPartitions, scala.math.Ordering<T> ord)
      Return a new RDD that has exactly numPartitions partitions.

      Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data.

      If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.

      Parameters:
      numPartitions - (undocumented)
      ord - (undocumented)
      Returns:
      (undocumented)
    • coalesce

      public RDD<T> coalesce(int numPartitions, boolean shuffle, scala.Option<PartitionCoalescer> partitionCoalescer, scala.math.Ordering<T> ord)
      Return a new RDD that is reduced into numPartitions partitions.

This results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead, each of the 100 new partitions will claim 10 of the current partitions. If a larger number of partitions is requested, it will stay at the current number of partitions.

However, if you're doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you would like (e.g. one node in the case of numPartitions = 1). To avoid this, you can pass shuffle = true. This will add a shuffle step, but it means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).

      Parameters:
      numPartitions - (undocumented)
      shuffle - (undocumented)
      partitionCoalescer - (undocumented)
      ord - (undocumented)
      Returns:
      (undocumented)
      Note:
      With shuffle = true, you can actually coalesce to a larger number of partitions. This is useful if you have a small number of partitions, say 100, potentially with a few partitions being abnormally large. Calling coalesce(1000, shuffle = true) will result in 1000 partitions with the data distributed using a hash partitioner. The optional partition coalescer passed in must be serializable.
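A brief sketch of the trade-off described above (data and partition counts are illustrative):

   val data = sc.parallelize(1 to 1000000, 1000)
   val narrowed = data.coalesce(100)                  // narrow dependency, no shuffle
   val spread = data.coalesce(2000, shuffle = true)   // shuffle = true allows growing the partition count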
    • sample

      public RDD<T> sample(boolean withReplacement, double fraction, long seed)
      Return a sampled subset of this RDD.

      Parameters:
      withReplacement - can elements be sampled multiple times (replaced when sampled out)
fraction - expected size of the sample as a fraction of this RDD's size. Without replacement: the probability that each element is chosen; fraction must be in [0, 1]. With replacement: the expected number of times each element is chosen; fraction must be greater than or equal to 0.
      seed - seed for the random number generator

      Returns:
      (undocumented)
      Note:
      This is NOT guaranteed to provide exactly the fraction of the count of the given RDD.
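For illustration (assuming an existing SparkContext named sc):

   val data = sc.parallelize(1 to 100)
   val sampled = data.sample(withReplacement = false, fraction = 0.1, seed = 42L)
   // sampled contains roughly, not exactly, 10 elements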
    • randomSplit

      public RDD<T>[] randomSplit(double[] weights, long seed)
      Randomly splits this RDD with the provided weights.

      Parameters:
      weights - weights for splits, will be normalized if they don't sum to 1
      seed - random seed

      Returns:
      split RDDs in an array
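A common use, for illustration, is a train/test split (data is an assumed existing RDD):

   val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 17L)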
    • takeSample

      public Object takeSample(boolean withReplacement, int num, long seed)
      Return a fixed-size sampled subset of this RDD in an array

      Parameters:
      withReplacement - whether sampling is done with replacement
      num - size of the returned sample
      seed - seed for the random number generator
      Returns:
      sample of specified size in an array

      Note:
      this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
    • union

      public RDD<T> union(RDD<T> other)
      Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).
      Parameters:
      other - (undocumented)
      Returns:
      (undocumented)
    • sortBy

      public <K> RDD<T> sortBy(scala.Function1<T,K> f, boolean ascending, int numPartitions, scala.math.Ordering<K> ord, scala.reflect.ClassTag<K> ctag)
      Return this RDD sorted by the given key function.
      Parameters:
      f - (undocumented)
      ascending - (undocumented)
      numPartitions - (undocumented)
      ord - (undocumented)
      ctag - (undocumented)
      Returns:
      (undocumented)
    • intersection

      public RDD<T> intersection(RDD<T> other)
      Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

      Parameters:
      other - (undocumented)
      Returns:
      (undocumented)
      Note:
      This method performs a shuffle internally.
    • intersection

      public RDD<T> intersection(RDD<T> other, Partitioner partitioner, scala.math.Ordering<T> ord)
      Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

      Parameters:
      partitioner - Partitioner to use for the resulting RDD
      other - (undocumented)
      ord - (undocumented)
      Returns:
      (undocumented)
      Note:
      This method performs a shuffle internally.

    • intersection

      public RDD<T> intersection(RDD<T> other, int numPartitions)
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did. Performs a hash partition across the cluster.

      Parameters:
      numPartitions - How many partitions to use in the resulting RDD
      other - (undocumented)
      Returns:
      (undocumented)
      Note:
      This method performs a shuffle internally.

    • glom

      public RDD<Object> glom()
      Return an RDD created by coalescing all elements within each partition into an array.
      Returns:
      (undocumented)
    • cartesian

      public <U> RDD<scala.Tuple2<T,U>> cartesian(RDD<U> other, scala.reflect.ClassTag<U> evidence$5)
      Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
      Parameters:
      other - (undocumented)
      evidence$5 - (undocumented)
      Returns:
      (undocumented)
    • groupBy

      public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T,K> f, scala.reflect.ClassTag<K> kt)
      Return an RDD of grouped items. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.

      Parameters:
      f - (undocumented)
      kt - (undocumented)
      Returns:
      (undocumented)
      Note:
      This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
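A sketch of the suggested alternative (nums is an assumed RDD[Int]):

   val grouped = nums.groupBy(_ % 2).mapValues(_.sum)           // shuffles every value per key
   val reduced = nums.map(x => (x % 2, x)).reduceByKey(_ + _)   // combines within partitions first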
    • groupBy

      public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T,K> f, int numPartitions, scala.reflect.ClassTag<K> kt)
      Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.

      Parameters:
      f - (undocumented)
      numPartitions - (undocumented)
      kt - (undocumented)
      Returns:
      (undocumented)
      Note:
      This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
    • groupBy

      public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T,K> f, Partitioner p, scala.reflect.ClassTag<K> kt, scala.math.Ordering<K> ord)
      Return an RDD of grouped items. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.

      Parameters:
      f - (undocumented)
      p - (undocumented)
      kt - (undocumented)
      ord - (undocumented)
      Returns:
      (undocumented)
      Note:
      This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
    • pipe

      public RDD<String> pipe(String command)
      Return an RDD created by piping elements to a forked external process.
      Parameters:
      command - (undocumented)
      Returns:
      (undocumented)
    • pipe

      public RDD<String> pipe(String command, scala.collection.Map<String,String> env)
      Return an RDD created by piping elements to a forked external process.
      Parameters:
      command - (undocumented)
      env - (undocumented)
      Returns:
      (undocumented)
    • pipe

      public RDD<String> pipe(scala.collection.Seq<String> command, scala.collection.Map<String,String> env, scala.Function1<scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printPipeContext, scala.Function2<T,scala.Function1<String,scala.runtime.BoxedUnit>,scala.runtime.BoxedUnit> printRDDElement, boolean separateWorkingDir, int bufferSize, String encoding)
      Return an RDD created by piping elements to a forked external process. The resulting RDD is computed by executing the given process once per partition. All elements of each input partition are written to a process's stdin as lines of input separated by a newline. The resulting partition consists of the process's stdout output, with each line of stdout resulting in one element of the output partition. A process is invoked even for empty partitions.

      The print behavior can be customized by providing two functions.

      Parameters:
      command - command to run in forked process.
      env - environment variables to set.
printPipeContext - Before piping elements, this function is called as an opportunity to pipe context data. The print-line function (like out.println) will be passed as printPipeContext's parameter.
printRDDElement - Use this function to customize how to pipe elements. This function will be called with each RDD element as the 1st parameter, and the print-line function (like out.println()) as the 2nd parameter. For example, to pipe the RDD data of groupBy() in a streaming way, instead of constructing a huge String concatenating all the elements:
      
                              def printRDDElement(record:(String, Seq[String]), f:String=>Unit) =
                                for (e <- record._2) {f(e)}
                              
      separateWorkingDir - Use separate working directories for each task.
      bufferSize - Buffer size for the stdin writer for the piped process.
      encoding - Char encoding used for interacting (via stdin, stdout and stderr) with the piped process
      Returns:
      the result RDD
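For illustration, a sketch that assumes the tr utility is available on every worker:

   val upper = sc.parallelize(Seq("alpha", "beta")).pipe("tr 'a-z' 'A-Z'")
   // upper.collect() == Array("ALPHA", "BETA")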
    • mapPartitions

      public <U> RDD<U> mapPartitions(scala.Function1<scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$6)
      Return a new RDD by applying a function to each partition of this RDD.

      preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.

      Parameters:
      f - (undocumented)
      preservesPartitioning - (undocumented)
      evidence$6 - (undocumented)
      Returns:
      (undocumented)
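A common pattern this enables, sketched below, is paying a per-partition setup cost once rather than per element (lines is an assumed RDD[String]; ExpensiveParser is hypothetical):

   val parsed = lines.mapPartitions { iter =>
     val parser = new ExpensiveParser()   // constructed once per partition, not per element
     iter.map(parser.parse)
   }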
    • mapPartitionsWithIndex

      public <U> RDD<U> mapPartitionsWithIndex(scala.Function2<Object,scala.collection.Iterator<T>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$9)
      Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

      preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.

      Parameters:
      f - (undocumented)
      preservesPartitioning - (undocumented)
      evidence$9 - (undocumented)
      Returns:
      (undocumented)
    • mapPartitionsWithEvaluator

      public <U> RDD<U> mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T,U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$10)
      Return a new RDD by applying an evaluator to each partition of this RDD. The given evaluator factory will be serialized and sent to executors, and each task will create an evaluator with the factory, and use the evaluator to transform the data of the input partition.
      Parameters:
      evaluatorFactory - (undocumented)
      evidence$10 - (undocumented)
      Returns:
      (undocumented)
    • zipPartitionsWithEvaluator

      public <U> RDD<U> zipPartitionsWithEvaluator(RDD<T> rdd2, PartitionEvaluatorFactory<T,U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$11)
Zip this RDD's partitions with another RDD and return a new RDD by applying an evaluator to the zipped partitions. Assumes that the two RDDs have the same number of partitions, but does not require them to have the same number of elements in each partition.
      Parameters:
      rdd2 - (undocumented)
      evaluatorFactory - (undocumented)
      evidence$11 - (undocumented)
      Returns:
      (undocumented)
    • zip

      public <U> RDD<scala.Tuple2<T,U>> zip(RDD<U> other, scala.reflect.ClassTag<U> evidence$13)
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the same number of partitions and the same number of elements in each partition (e.g. one was made through a map on the other).
      Parameters:
      other - (undocumented)
      evidence$13 - (undocumented)
      Returns:
      (undocumented)
    • zipPartitions

      public <B, V> RDD<V> zipPartitions(RDD<B> rdd2, boolean preservesPartitioning, scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$14, scala.reflect.ClassTag<V> evidence$15)
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions. Assumes that all the RDDs have the same number of partitions, but does not require them to have the same number of elements in each partition.
      Parameters:
      rdd2 - (undocumented)
      preservesPartitioning - (undocumented)
      f - (undocumented)
      evidence$14 - (undocumented)
      evidence$15 - (undocumented)
      Returns:
      (undocumented)
    • zipPartitions

      public <B, V> RDD<V> zipPartitions(RDD<B> rdd2, scala.Function2<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$16, scala.reflect.ClassTag<V> evidence$17)
    • zipPartitions

      public <B, C, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$18, scala.reflect.ClassTag<C> evidence$19, scala.reflect.ClassTag<V> evidence$20)
    • zipPartitions

      public <B, C, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, scala.Function3<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$21, scala.reflect.ClassTag<C> evidence$22, scala.reflect.ClassTag<V> evidence$23)
    • zipPartitions

      public <B, C, D, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, boolean preservesPartitioning, scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$24, scala.reflect.ClassTag<C> evidence$25, scala.reflect.ClassTag<D> evidence$26, scala.reflect.ClassTag<V> evidence$27)
    • zipPartitions

      public <B, C, D, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, scala.Function4<scala.collection.Iterator<T>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<D>,scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$28, scala.reflect.ClassTag<C> evidence$29, scala.reflect.ClassTag<D> evidence$30, scala.reflect.ClassTag<V> evidence$31)
    • foreach

      public void foreach(scala.Function1<T,scala.runtime.BoxedUnit> f)
      Applies a function f to all elements of this RDD.
      Parameters:
      f - (undocumented)
    • foreachPartition

      public void foreachPartition(scala.Function1<scala.collection.Iterator<T>,scala.runtime.BoxedUnit> f)
      Applies a function f to each partition of this RDD.
      Parameters:
      f - (undocumented)
    • collect

      public Object collect()
      Return an array that contains all of the elements in this RDD.

      Returns:
      (undocumented)
      Note:
      This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
    • toLocalIterator

      public scala.collection.Iterator<T> toLocalIterator()
      Return an iterator that contains all of the elements in this RDD.

      The iterator will consume as much memory as the largest partition in this RDD.

      Returns:
      (undocumented)
      Note:
This results in multiple Spark jobs, and if the input RDD is the result of a wide transformation (e.g. a join with different partitioners), it should be cached first to avoid recomputing it.
    • collect

      public <U> RDD<U> collect(scala.PartialFunction<T,U> f, scala.reflect.ClassTag<U> evidence$32)
      Return an RDD that contains all matching values by applying f.
      Parameters:
      f - (undocumented)
      evidence$32 - (undocumented)
      Returns:
      (undocumented)
    • subtract

      public RDD<T> subtract(RDD<T> other)
      Return an RDD with the elements from this that are not in other.

Uses this RDD's partitioner/partition size, because even if other is huge, the resulting RDD will be no larger than this one.

      Parameters:
      other - (undocumented)
      Returns:
      (undocumented)
    • subtract

      public RDD<T> subtract(RDD<T> other, int numPartitions)
      Return an RDD with the elements from this that are not in other.
      Parameters:
      other - (undocumented)
      numPartitions - (undocumented)
      Returns:
      (undocumented)
    • subtract

      public RDD<T> subtract(RDD<T> other, Partitioner p, scala.math.Ordering<T> ord)
      Return an RDD with the elements from this that are not in other.
      Parameters:
      other - (undocumented)
      p - (undocumented)
      ord - (undocumented)
      Returns:
      (undocumented)
    • reduce

      public T reduce(scala.Function2<T,T,T> f)
      Reduces the elements of this RDD using the specified commutative and associative binary operator.
      Parameters:
      f - (undocumented)
      Returns:
      (undocumented)
    • treeReduce

      public T treeReduce(scala.Function2<T,T,T> f, int depth)
      Reduces the elements of this RDD in a multi-level tree pattern.

      Parameters:
      depth - suggested depth of the tree (default: 2)
      f - (undocumented)
      Returns:
      (undocumented)
    • fold

      public T fold(T zeroValue, scala.Function2<T,T,T> op)
      Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value". The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.

This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to each partition individually, and those per-partition results are then folded into the final result, rather than applying the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.

      Parameters:
      zeroValue - the initial value for the accumulated result of each partition for the op operator, and also the initial value for the combine results from different partitions for the op operator - this will typically be the neutral element (e.g. Nil for list concatenation or 0 for summation)
      op - an operator used to both accumulate results within a partition and combine results from different partitions
      Returns:
      (undocumented)
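For illustration, since zeroValue is applied once per partition and once more when combining, it should be the neutral element of op:

   sc.parallelize(Seq(1, 2, 3, 4), numSlices = 2).fold(0)(_ + _)   // 10; a nonzero zeroValue would be added once per partition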
    • aggregate

      public <U> U aggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, scala.reflect.ClassTag<U> evidence$33)
      Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into an U and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.

      Parameters:
      zeroValue - the initial value for the accumulated result of each partition for the seqOp operator, and also the initial value for the combine results from different partitions for the combOp operator - this will typically be the neutral element (e.g. Nil for list concatenation or 0 for summation)
      seqOp - an operator used to accumulate results within a partition
      combOp - an associative operator used to combine results from different partitions
      evidence$33 - (undocumented)
      Returns:
      (undocumented)
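For illustration, a sketch that computes a sum and a count in one pass to derive a mean (nums is an assumed RDD[Int]):

   val (sum, count) = nums.aggregate((0L, 0L))(
     (acc, x) => (acc._1 + x, acc._2 + 1),    // seqOp: merge an element T into the accumulator U
     (a, b) => (a._1 + b._1, a._2 + b._2)     // combOp: merge two accumulators U
   )
   val mean = sum.toDouble / count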
    • treeAggregate

      public <U> U treeAggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, int depth, scala.reflect.ClassTag<U> evidence$34)
      Aggregates the elements of this RDD in a multi-level tree pattern. This method is semantically identical to aggregate(U, scala.Function2<U, T, U>, scala.Function2<U, U, U>, scala.reflect.ClassTag<U>).

      Parameters:
      depth - suggested depth of the tree (default: 2)
      zeroValue - (undocumented)
      seqOp - (undocumented)
      combOp - (undocumented)
      evidence$34 - (undocumented)
      Returns:
      (undocumented)
    • treeAggregate

      public <U> U treeAggregate(U zeroValue, scala.Function2<U,T,U> seqOp, scala.Function2<U,U,U> combOp, int depth, boolean finalAggregateOnExecutor, scala.reflect.ClassTag<U> evidence$35)
      Parameters:
      finalAggregateOnExecutor - do final aggregation on executor
      zeroValue - (undocumented)
      seqOp - (undocumented)
      combOp - (undocumented)
      depth - (undocumented)
      evidence$35 - (undocumented)
      Returns:
      (undocumented)
    • count

      public long count()
      Return the number of elements in the RDD.
      Returns:
      (undocumented)
    • countApprox

      public PartialResult<BoundedDouble> countApprox(long timeout, double confidence)
      Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.

      The confidence is the probability that the error bounds of the result will contain the true value. That is, if countApprox were called repeatedly with confidence 0.9, we would expect 90% of the results to contain the true count. The confidence must be in the range [0,1] or an exception will be thrown.

      Parameters:
      timeout - maximum time to wait for the job, in milliseconds
      confidence - the desired statistical confidence in the result
      Returns:
      a potentially incomplete result, with error bounds
    • countByValue

      public scala.collection.Map<T,Object> countByValue(scala.math.Ordering<T> ord)
      Return the count of each unique value in this RDD as a local map of (value, count) pairs.

      Parameters:
      ord - (undocumented)
      Returns:
      (undocumented)
      Note:
This method should only be used if the resulting map is expected to be small, as the whole thing is loaded into the driver's memory. To handle very large results, consider using

   rdd.map(x => (x, 1L)).reduceByKey(_ + _)

which returns an RDD[(T, Long)] instead of a map.

    • countByValueApprox

      public PartialResult<scala.collection.Map<T,BoundedDouble>> countByValueApprox(long timeout, double confidence, scala.math.Ordering<T> ord)
      Approximate version of countByValue().

      Parameters:
      timeout - maximum time to wait for the job, in milliseconds
      confidence - the desired statistical confidence in the result
      ord - (undocumented)
      Returns:
      a potentially incomplete result, with error bounds
    • countApproxDistinct

      public long countApproxDistinct(int p, int sp)
      Return approximate number of distinct elements in the RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

The relative accuracy is approximately 1.054 / sqrt(2^p). Setting a nonzero sp (with sp greater than p) triggers the sparse representation of registers, which may reduce the memory consumption and increase accuracy when the cardinality is small.

      Parameters:
      p - The precision value for the normal set. p must be a value between 4 and sp if sp is not zero (32 max).
      sp - The precision value for the sparse set, between 0 and 32. If sp equals 0, the sparse representation is skipped.
      Returns:
      (undocumented)
    • countApproxDistinct

      public long countApproxDistinct(double relativeSD)
      Return approximate number of distinct elements in the RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

      Parameters:
      relativeSD - Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
      Returns:
      (undocumented)
    • zipWithIndex

      public RDD<scala.Tuple2<T,Object>> zipWithIndex()
      Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition receives the largest index.

This is similar to Scala's zipWithIndex, but it uses Long instead of Int as the index type. This method needs to trigger a Spark job when this RDD contains more than one partition.

      Returns:
      (undocumented)
      Note:
      Some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The index assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.
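For illustration:

   sc.parallelize(Seq("a", "b", "c", "d"), 2).zipWithIndex().collect()
   // Array((a,0), (b,1), (c,2), (d,3))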
    • zipWithUniqueId

      public RDD<scala.Tuple2<T,Object>> zipWithUniqueId()
Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may be gaps, but this method won't trigger a Spark job, which is different from zipWithIndex().

      Returns:
      (undocumented)
      Note:
      Some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The unique ID assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.
    • take

      public Object take(int num)
Take the first num elements of the RDD. It works by first scanning one partition, and using the results from that partition to estimate the number of additional partitions needed to satisfy the limit.

      Parameters:
      num - (undocumented)
      Returns:
      (undocumented)
      Note:
      This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

Due to complications in the internal implementation, this method will also raise an exception if called on an RDD of Nothing or Null.

    • first

      public T first()
      Return the first element in this RDD.
      Returns:
      (undocumented)
    • top

      public Object top(int num, scala.math.Ordering<T> ord)
      Returns the top k (largest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering. This does the opposite of takeOrdered(int,scala.math.Ordering<T>). For example:
      
         sc.parallelize(Seq(10, 4, 2, 12, 3)).top(1)
         // returns Array(12)
      
         sc.parallelize(Seq(2, 3, 4, 5, 6)).top(2)
         // returns Array(6, 5)
       

      Parameters:
      num - k, the number of top elements to return
      ord - the implicit ordering for T
      Returns:
      an array of top elements
      Note:
      This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

    • takeOrdered

      public Object takeOrdered(int num, scala.math.Ordering<T> ord)
      Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering. This does the opposite of top(int,scala.math.Ordering<T>). For example:
      
         sc.parallelize(Seq(10, 4, 2, 12, 3)).takeOrdered(1)
         // returns Array(2)
      
         sc.parallelize(Seq(2, 3, 4, 5, 6)).takeOrdered(2)
         // returns Array(2, 3)
       

      Parameters:
      num - k, the number of elements to return
      ord - the implicit ordering for T
      Returns:
      an array of top elements
      Note:
      This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

    • max

      public T max(scala.math.Ordering<T> ord)
      Returns the max of this RDD as defined by the implicit Ordering[T].
      Parameters:
      ord - (undocumented)
      Returns:
      the maximum element of the RDD
    • min

      public T min(scala.math.Ordering<T> ord)
      Returns the min of this RDD as defined by the implicit Ordering[T].
      Parameters:
      ord - (undocumented)
      Returns:
      the minimum element of the RDD
    • isEmpty

      public boolean isEmpty()
      Returns:
      true if and only if the RDD contains no elements at all. Note that an RDD may be empty even when it has at least 1 partition.
      Note:
Due to complications in the internal implementation, this method will raise an exception if called on an RDD of Nothing or Null. This may come up in practice because, for example, the type of parallelize(Seq()) is RDD[Nothing]. (parallelize(Seq()) should be avoided anyway in favor of parallelize(Seq[T]()).)
    • saveAsTextFile

      public void saveAsTextFile(String path)
      Save this RDD as a text file, using string representations of elements.
      Parameters:
      path - (undocumented)
    • saveAsTextFile

      public void saveAsTextFile(String path, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
      Save this RDD as a compressed text file, using string representations of elements.
      Parameters:
      path - (undocumented)
      codec - (undocumented)
    • saveAsObjectFile

      public void saveAsObjectFile(String path)
      Save this RDD as a SequenceFile of serialized objects.
      Parameters:
      path - (undocumented)
    • keyBy

      public <K> RDD<scala.Tuple2<K,T>> keyBy(scala.Function1<T,K> f)
      Creates tuples of the elements in this RDD by applying f.
      Parameters:
      f - (undocumented)
      Returns:
      (undocumented)
    • checkpoint

      public void checkpoint()
      Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext#setCheckpointDir and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
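A sketch of the expected call order (the directory is illustrative; data is an assumed existing RDD):

   sc.setCheckpointDir("/tmp/checkpoints")
   val derived = data.map(_ * 2)
   derived.persist()      // recommended, so the checkpoint write does not recompute the RDD
   derived.checkpoint()   // must be called before any job runs on this RDD
   derived.count()        // the first action materializes the checkpoint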
    • localCheckpoint

      public RDD<T> localCheckpoint()
      Mark this RDD for local checkpointing using Spark's existing caching layer.

      This method is for users who wish to truncate RDD lineages while skipping the expensive step of replicating the materialized data in a reliable distributed file system. This is useful for RDDs with long lineages that need to be truncated periodically (e.g. GraphX).

      Local checkpointing sacrifices fault-tolerance for performance. In particular, checkpointed data is written to ephemeral local storage in the executors instead of to a reliable, fault-tolerant storage. The effect is that if an executor fails during the computation, the checkpointed data may no longer be accessible, causing an irrecoverable job failure.

      This is NOT safe to use with dynamic allocation, which removes executors along with their cached blocks. If you must use both features, you are advised to set spark.dynamicAllocation.cachedExecutorIdleTimeout to a high value.

      The checkpoint directory set through SparkContext#setCheckpointDir is not used.

      Returns:
      (undocumented)
    • isCheckpointed

      public boolean isCheckpointed()
      Return whether this RDD is checkpointed and materialized, either reliably or locally.
      Returns:
      (undocumented)
    • getCheckpointFile

      public scala.Option<String> getCheckpointFile()
      Gets the name of the directory to which this RDD was checkpointed. This is not defined if the RDD is checkpointed locally.
      Returns:
      (undocumented)
    • cleanShuffleDependencies

      public void cleanShuffleDependencies(boolean blocking)
Removes an RDD's shuffles and its non-persisted ancestors. When running without a shuffle service, cleaning up shuffle files enables downscaling. If you use the RDD after this call, you should checkpoint and materialize it first. If you are uncertain of what you are doing, please do not use this feature. Additional techniques for mitigating orphaned shuffle files:
- Tuning the driver GC to be more aggressive, so the regular context cleaner is triggered
- Setting an appropriate TTL for shuffle files to be auto cleaned
      Parameters:
      blocking - (undocumented)
    • barrier

      public RDDBarrier<T> barrier()
      :: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together. In case of a task failure, instead of only restarting the failed task, Spark will abort the entire stage and re-launch all tasks for this stage. The barrier execution mode feature is experimental and it only handles limited scenarios. Please read the linked SPIP and design docs to understand the limitations and future plans.
      Returns:
      an RDDBarrier instance that provides actions within a barrier stage
    • withResources

      public RDD<T> withResources(ResourceProfile rp)
      Specify a ResourceProfile to use when calculating this RDD. This is only supported on certain cluster managers and currently requires dynamic allocation to be enabled. It will result in new executors with the resources specified being acquired to calculate the RDD.
      Parameters:
      rp - (undocumented)
      Returns:
      (undocumented)
    • getResourceProfile

      public ResourceProfile getResourceProfile()
      Get the ResourceProfile specified with this RDD or null if it wasn't specified.
      Returns:
      the user specified ResourceProfile or null (for Java compatibility) if none was specified
    • context

      public SparkContext context()
      The SparkContext that this RDD was created on.
    • toDebugString

      public String toDebugString()
      A description of this RDD and its recursive dependencies for debugging.
    • toString

      public String toString()
      Overrides:
      toString in class Object
    • toJavaRDD

      public JavaRDD<T> toJavaRDD()