Class RDD<T>
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging
- Direct Known Subclasses:
BaseRRDD, CoGroupedRDD, EdgeRDD, HadoopRDD, JdbcRDD, NewHadoopRDD, PartitionPruningRDD, ShuffledRDD, UnionRDD, VertexRDD
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel. This class contains the basic operations available on all RDDs, such as map, filter, and persist. In addition, PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; DoubleRDDFunctions contains operations available only on RDDs of Doubles; and SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. All operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.
Internally, each RDD is characterized by five main properties:
- A list of partitions
- A function for computing each split
- A list of dependencies on other RDDs
- Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
- Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)
All of the scheduling and execution in Spark is done based on these methods, allowing each RDD to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for reading data from a new storage system) by overriding these functions. Please refer to the Spark paper for more details on RDD internals.
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
RDD(RDD<?> oneParent, scala.reflect.ClassTag<T> evidence$2)
Construct an RDD with just a one-to-one dependency on one parent.
RDD(SparkContext _sc, scala.collection.immutable.Seq<Dependency<?>> deps, scala.reflect.ClassTag<T> evidence$1)
Method Summary
<U> U aggregate(U zeroValue, scala.Function2<U, T, U> seqOp, scala.Function2<U, U, U> combOp, scala.reflect.ClassTag<U> evidence$33)
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
RDDBarrier<T> barrier()
:: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together.
RDD<T> cache()
Persist this RDD with the default storage level (MEMORY_ONLY).
<U> RDD<scala.Tuple2<T, U>> cartesian(RDD<U> other, scala.reflect.ClassTag<U> evidence$5)
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
void checkpoint()
Mark this RDD for checkpointing.
void cleanShuffleDependencies(boolean blocking)
Removes an RDD's shuffles and its non-persisted ancestors.
RDD<T> coalesce(int numPartitions, boolean shuffle, scala.Option<PartitionCoalescer> partitionCoalescer, scala.math.Ordering<T> ord)
Return a new RDD that is reduced into numPartitions partitions.
Object collect()
Return an array that contains all of the elements in this RDD.
<U> RDD<U> collect(scala.PartialFunction<T, U> f, scala.reflect.ClassTag<U> evidence$32)
Return an RDD that contains all matching values by applying f.
abstract scala.collection.Iterator<T> compute(Partition split, TaskContext context)
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
SparkContext context()
The SparkContext that this RDD was created on.
long count()
Return the number of elements in the RDD.
PartialResult<BoundedDouble> countApprox(long timeout, double confidence)
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
long countApproxDistinct(double relativeSD)
Return approximate number of distinct elements in the RDD.
long countApproxDistinct(int p, int sp)
Return approximate number of distinct elements in the RDD.
scala.collection.Map<T, Object> countByValue(scala.math.Ordering<T> ord)
Return the count of each unique value in this RDD as a local map of (value, count) pairs.
PartialResult<scala.collection.Map<T, BoundedDouble>> countByValueApprox(long timeout, double confidence, scala.math.Ordering<T> ord)
Approximate version of countByValue().
final scala.collection.immutable.Seq<Dependency<?>> dependencies()
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
RDD<T> distinct()
Return a new RDD containing the distinct elements in this RDD.
RDD<T> distinct(int numPartitions, scala.math.Ordering<T> ord)
Return a new RDD containing the distinct elements in this RDD.
static DoubleRDDFunctions doubleRDDToDoubleRDDFunctions(RDD<Object> rdd)
RDD<T> filter(scala.Function1<T, Object> f)
Return a new RDD containing only the elements that satisfy a predicate.
T first()
Return the first element in this RDD.
<U> RDD<U> flatMap(scala.Function1<T, scala.collection.IterableOnce<U>> f, scala.reflect.ClassTag<U> evidence$4)
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
T fold(T zeroValue, scala.Function2<T, T, T> op)
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value".
void foreach(scala.Function1<T, scala.runtime.BoxedUnit> f)
Applies a function f to all elements of this RDD.
void foreachPartition(scala.Function1<scala.collection.Iterator<T>, scala.runtime.BoxedUnit> f)
Applies a function f to each partition of this RDD.
scala.Option<String> getCheckpointFile()
Gets the name of the directory to which this RDD was checkpointed.
final int getNumPartitions()
Returns the number of partitions of this RDD.
ResourceProfile getResourceProfile()
Get the ResourceProfile specified with this RDD or null if it wasn't specified.
StorageLevel getStorageLevel()
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
RDD<Object> glom()
Return an RDD created by coalescing all elements within each partition into an array.
<K> RDD<scala.Tuple2<K, scala.collection.Iterable<T>>> groupBy(scala.Function1<T, K> f, int numPartitions, scala.reflect.ClassTag<K> kt)
Return an RDD of grouped elements.
<K> RDD<scala.Tuple2<K, scala.collection.Iterable<T>>> groupBy(scala.Function1<T, K> f, Partitioner p, scala.reflect.ClassTag<K> kt, scala.math.Ordering<K> ord)
Return an RDD of grouped items.
<K> RDD<scala.Tuple2<K, scala.collection.Iterable<T>>> groupBy(scala.Function1<T, K> f, scala.reflect.ClassTag<K> kt)
Return an RDD of grouped items.
int id()
A unique ID for this RDD (within its SparkContext).
RDD<T> intersection(RDD<T> other)
Return the intersection of this RDD and another one.
RDD<T> intersection(RDD<T> other, int numPartitions)
Return the intersection of this RDD and another one.
RDD<T> intersection(RDD<T> other, Partitioner partitioner, scala.math.Ordering<T> ord)
Return the intersection of this RDD and another one.
boolean isCheckpointed()
Return whether this RDD is checkpointed and materialized, either reliably or locally.
boolean isEmpty()
final scala.collection.Iterator<T> iterator(Partition split, TaskContext context)
Internal method to this RDD; will read from cache if applicable, or otherwise compute it.
<K> RDD<scala.Tuple2<K, T>> keyBy(scala.Function1<T, K> f)
Creates tuples of the elements in this RDD by applying f.
RDD<T> localCheckpoint()
Mark this RDD for local checkpointing using Spark's existing caching layer.
<U> RDD<U> map(scala.Function1<T, U> f, scala.reflect.ClassTag<U> evidence$3)
Return a new RDD by applying a function to all elements of this RDD.
<U> RDD<U> mapPartitions(scala.Function1<scala.collection.Iterator<T>, scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$6)
Return a new RDD by applying a function to each partition of this RDD.
<U> RDD<U> mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T, U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$10)
Return a new RDD by applying an evaluator to each partition of this RDD.
<U> RDD<U> mapPartitionsWithIndex(scala.Function2<Object, scala.collection.Iterator<T>, scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$9)
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
T max(scala.math.Ordering<T> ord)
Returns the max of this RDD as defined by the implicit Ordering[T].
T min(scala.math.Ordering<T> ord)
Returns the min of this RDD as defined by the implicit Ordering[T].
String name()
A friendly name for this RDD.
static <T> DoubleRDDFunctions numericRDDToDoubleRDDFunctions(RDD<T> rdd, scala.math.Numeric<T> num)
scala.Option<Partitioner> partitioner()
Optionally overridden by subclasses to specify how they are partitioned.
final Partition[] partitions()
Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
RDD<T> persist()
Persist this RDD with the default storage level (MEMORY_ONLY).
RDD<T> persist(StorageLevel newLevel)
Set this RDD's storage level to persist its values across operations after the first time it is computed.
RDD<String> pipe(String command)
Return an RDD created by piping elements to a forked external process.
RDD<String> pipe(String command, scala.collection.Map<String, String> env)
Return an RDD created by piping elements to a forked external process.
RDD<String> pipe(scala.collection.immutable.Seq<String> command, scala.collection.Map<String, String> env, scala.Function1<scala.Function1<String, scala.runtime.BoxedUnit>, scala.runtime.BoxedUnit> printPipeContext, scala.Function2<T, scala.Function1<String, scala.runtime.BoxedUnit>, scala.runtime.BoxedUnit> printRDDElement, boolean separateWorkingDir, int bufferSize, String encoding)
Return an RDD created by piping elements to a forked external process.
final scala.collection.immutable.Seq<String> preferredLocations(Partition split)
Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
RDD<T>[] randomSplit(double[] weights, long seed)
Randomly splits this RDD with the provided weights.
static <T> AsyncRDDActions<T> rddToAsyncRDDActions(RDD<T> rdd, scala.reflect.ClassTag<T> evidence$38)
static <K, V> OrderedRDDFunctions<K, V, scala.Tuple2<K, V>> rddToOrderedRDDFunctions(RDD<scala.Tuple2<K, V>> rdd, scala.math.Ordering<K> evidence$39, scala.reflect.ClassTag<K> evidence$40, scala.reflect.ClassTag<V> evidence$41)
static <K, V> PairRDDFunctions<K, V> rddToPairRDDFunctions(RDD<scala.Tuple2<K, V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, scala.math.Ordering<K> ord)
static <K, V> SequenceFileRDDFunctions<K, V> rddToSequenceFileRDDFunctions(RDD<scala.Tuple2<K, V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, <any> keyWritableFactory, <any> valueWritableFactory)
T reduce(scala.Function2<T, T, T> f)
Reduces the elements of this RDD using the specified commutative and associative binary operator.
RDD<T> repartition(int numPartitions, scala.math.Ordering<T> ord)
Return a new RDD that has exactly numPartitions partitions.
RDD<T> sample(boolean withReplacement, double fraction, long seed)
Return a sampled subset of this RDD.
void saveAsObjectFile(String path)
Save this RDD as a SequenceFile of serialized objects.
void saveAsTextFile(String path)
Save this RDD as a text file, using string representations of elements.
void saveAsTextFile(String path, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
Save this RDD as a compressed text file, using string representations of elements.
RDD<T> setName(String _name)
Assign a name to this RDD.
<K> RDD<T> sortBy(scala.Function1<T, K> f, boolean ascending, int numPartitions, scala.math.Ordering<K> ord, scala.reflect.ClassTag<K> ctag)
Return this RDD sorted by the given key function.
SparkContext sparkContext()
The SparkContext that created this RDD.
RDD<T> subtract(RDD<T> other)
Return an RDD with the elements from this that are not in other.
RDD<T> subtract(RDD<T> other, int numPartitions)
Return an RDD with the elements from this that are not in other.
RDD<T> subtract(RDD<T> other, Partitioner p, scala.math.Ordering<T> ord)
Return an RDD with the elements from this that are not in other.
Object take(int num)
Take the first num elements of the RDD.
Object takeOrdered(int num, scala.math.Ordering<T> ord)
Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
Object takeSample(boolean withReplacement, int num, long seed)
Return a fixed-size sampled subset of this RDD in an array.
String toDebugString()
A description of this RDD and its recursive dependencies for debugging.
JavaRDD<T> toJavaRDD()
scala.collection.Iterator<T> toLocalIterator()
Return an iterator that contains all of the elements in this RDD.
Object top(int num, scala.math.Ordering<T> ord)
Returns the top k (largest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering.
String toString()
<U> U treeAggregate(U zeroValue, scala.Function2<U, T, U> seqOp, scala.Function2<U, U, U> combOp, int depth, boolean finalAggregateOnExecutor, scala.reflect.ClassTag<U> evidence$35)
treeAggregate(U, scala.Function2<U, T, U>, scala.Function2<U, U, U>, int, scala.reflect.ClassTag<U>) with a parameter to do the final aggregation on the executor.
<U> U treeAggregate(U zeroValue, scala.Function2<U, T, U> seqOp, scala.Function2<U, U, U> combOp, int depth, scala.reflect.ClassTag<U> evidence$34)
Aggregates the elements of this RDD in a multi-level tree pattern.
T treeReduce(scala.Function2<T, T, T> f, int depth)
Reduces the elements of this RDD in a multi-level tree pattern.
RDD<T> union(RDD<T> other)
Return the union of this RDD and another one.
RDD<T> unpersist(boolean blocking)
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
RDD<T> withResources(ResourceProfile rp)
Specify a ResourceProfile to use when calculating this RDD.
<U> RDD<scala.Tuple2<T, U>> zip(RDD<U> other, scala.reflect.ClassTag<U> evidence$13)
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.
<B, V> RDD<V> zipPartitions(RDD<B> rdd2, boolean preservesPartitioning, scala.Function2<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$14, scala.reflect.ClassTag<V> evidence$15)
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.
<B, C, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.Function3<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$18, scala.reflect.ClassTag<C> evidence$19, scala.reflect.ClassTag<V> evidence$20)
<B, C, D, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, boolean preservesPartitioning, scala.Function4<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<D>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$24, scala.reflect.ClassTag<C> evidence$25, scala.reflect.ClassTag<D> evidence$26, scala.reflect.ClassTag<V> evidence$27)
<B, C, D, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, scala.Function4<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<D>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$28, scala.reflect.ClassTag<C> evidence$29, scala.reflect.ClassTag<D> evidence$30, scala.reflect.ClassTag<V> evidence$31)
<B, C, V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, scala.Function3<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$21, scala.reflect.ClassTag<C> evidence$22, scala.reflect.ClassTag<V> evidence$23)
<B, V> RDD<V> zipPartitions(RDD<B> rdd2, scala.Function2<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$16, scala.reflect.ClassTag<V> evidence$17)
<U> RDD<U> zipPartitionsWithEvaluator(RDD<T> rdd2, PartitionEvaluatorFactory<T, U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$11)
Zip this RDD's partitions with another RDD and return a new RDD by applying an evaluator to the zipped partitions.
RDD<scala.Tuple2<T, Object>> zipWithIndex()
Zips this RDD with its element indices.
RDD<scala.Tuple2<T, Object>> zipWithUniqueId()
Zips this RDD with generated unique Long ids.
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
-
Constructor Details
-
RDD
public RDD(SparkContext _sc, scala.collection.immutable.Seq<Dependency<?>> deps, scala.reflect.ClassTag<T> evidence$1)
-
RDD
public RDD(RDD<?> oneParent, scala.reflect.ClassTag<T> evidence$2)
Construct an RDD with just a one-to-one dependency on one parent.
-
Method Details
-
rddToPairRDDFunctions
public static <K,V> PairRDDFunctions<K,V> rddToPairRDDFunctions(RDD<scala.Tuple2<K, V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, scala.math.Ordering<K> ord) -
rddToAsyncRDDActions
public static <T> AsyncRDDActions<T> rddToAsyncRDDActions(RDD<T> rdd, scala.reflect.ClassTag<T> evidence$38) -
rddToSequenceFileRDDFunctions
public static <K,V> SequenceFileRDDFunctions<K,V> rddToSequenceFileRDDFunctions(RDD<scala.Tuple2<K, V>> rdd, scala.reflect.ClassTag<K> kt, scala.reflect.ClassTag<V> vt, <any> keyWritableFactory, <any> valueWritableFactory) -
rddToOrderedRDDFunctions
public static <K,V> OrderedRDDFunctions<K,V,scala.Tuple2<K,V>> rddToOrderedRDDFunctions(RDD<scala.Tuple2<K, V>> rdd, scala.math.Ordering<K> evidence$39, scala.reflect.ClassTag<K> evidence$40, scala.reflect.ClassTag<V> evidence$41) -
doubleRDDToDoubleRDDFunctions
public static DoubleRDDFunctions doubleRDDToDoubleRDDFunctions(RDD<Object> rdd) -
numericRDDToDoubleRDDFunctions
public static <T> DoubleRDDFunctions numericRDDToDoubleRDDFunctions(RDD<T> rdd, scala.math.Numeric<T> num) -
compute
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
- Parameters:
split - (undocumented)
context - (undocumented)
- Returns:
(undocumented)
-
partitioner
Optionally overridden by subclasses to specify how they are partitioned. -
sparkContext
The SparkContext that created this RDD. -
id
public int id()
A unique ID for this RDD (within its SparkContext). -
name
A friendly name for this RDD. -
setName
Assign a name to this RDD. -
persist
Set this RDD's storage level to persist its values across operations after the first time it is computed. This can only be used to assign a new storage level if the RDD does not have a storage level set yet. Local checkpointing is an exception.
- Parameters:
newLevel - (undocumented)
- Returns:
(undocumented)
-
persist
Persist this RDD with the default storage level (MEMORY_ONLY).
- Returns:
(undocumented)
-
cache
Persist this RDD with the default storage level (MEMORY_ONLY).
- Returns:
(undocumented)
-
unpersist
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
- Parameters:
blocking - Whether to block until all blocks are deleted (default: false)
- Returns:
This RDD.
-
getStorageLevel
Get the RDD's current storage level, or StorageLevel.NONE if none is set. -
dependencies
Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.
- Returns:
(undocumented)
-
partitions
Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.
- Returns:
(undocumented)
-
getNumPartitions
public final int getNumPartitions()
Returns the number of partitions of this RDD.
- Returns:
(undocumented)
-
preferredLocations
Get the preferred locations of a partition, taking into account whether the RDD is checkpointed.
- Parameters:
split - (undocumented)
- Returns:
(undocumented)
-
iterator
Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should ''not'' be called by users directly, but is available for implementers of custom subclasses of RDD.
- Parameters:
split - (undocumented)
context - (undocumented)
- Returns:
(undocumented)
-
map
Return a new RDD by applying a function to all elements of this RDD.
- Parameters:
f - (undocumented)
evidence$3 - (undocumented)
- Returns:
(undocumented)
-
flatMap
public <U> RDD<U> flatMap(scala.Function1<T, scala.collection.IterableOnce<U>> f, scala.reflect.ClassTag<U> evidence$4)
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
- Parameters:
f - (undocumented)
evidence$4 - (undocumented)
- Returns:
(undocumented)
-
filter
Return a new RDD containing only the elements that satisfy a predicate.
- Parameters:
f - (undocumented)
- Returns:
(undocumented)
-
distinct
Return a new RDD containing the distinct elements in this RDD.
- Parameters:
numPartitions - (undocumented)
ord - (undocumented)
- Returns:
(undocumented)
-
distinct
Return a new RDD containing the distinct elements in this RDD.
- Returns:
(undocumented)
-
repartition
Return a new RDD that has exactly numPartitions partitions.
Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data.
If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.
- Parameters:
numPartitions - (undocumented)
ord - (undocumented)
- Returns:
(undocumented)
-
coalesce
public RDD<T> coalesce(int numPartitions, boolean shuffle, scala.Option<PartitionCoalescer> partitionCoalescer, scala.math.Ordering<T> ord)
Return a new RDD that is reduced into numPartitions partitions.
This results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead each of the 100 new partitions will claim 10 of the current partitions. If a larger number of partitions is requested, it will stay at the current number of partitions.
However, if you're doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you like (e.g. one node in the case of numPartitions = 1). To avoid this, you can pass shuffle = true. This will add a shuffle step, but means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).
- Parameters:
numPartitions - (undocumented)
shuffle - (undocumented)
partitionCoalescer - (undocumented)
ord - (undocumented)
- Returns:
(undocumented)
- Note:
With shuffle = true, you can actually coalesce to a larger number of partitions. This is useful if you have a small number of partitions, say 100, potentially with a few partitions being abnormally large. Calling coalesce(1000, shuffle = true) will result in 1000 partitions with the data distributed using a hash partitioner. The optional partition coalescer passed in must be serializable.
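For example, a minimal sketch of the shuffle/no-shuffle trade-off (illustrative only; assumes an active SparkContext named sc):
val rdd = sc.parallelize(1 to 1000000, 1000)
val narrowed = rdd.coalesce(100)              // narrow dependency: no shuffle, 100 partitions
val single = rdd.coalesce(1, shuffle = true)  // shuffle added so upstream work stays parallel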
-
sample
Return a sampled subset of this RDD.
- Parameters:
withReplacement - can elements be sampled multiple times (replaced when sampled out)
fraction - expected size of the sample as a fraction of this RDD's size; without replacement: probability that each element is chosen; fraction must be [0, 1]; with replacement: expected number of times each element is chosen; fraction must be greater than or equal to 0
seed - seed for the random number generator
- Returns:
(undocumented)
- Note:
This is NOT guaranteed to provide exactly the fraction of the count of the given RDD.
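An illustrative sketch (assumes an active SparkContext named sc; as noted, the returned size is approximate, not exactly fraction * count):
val data = sc.parallelize(1 to 10000)
val s1 = data.sample(withReplacement = false, fraction = 0.1, seed = 42L) // each element kept with probability ~0.1
val s2 = data.sample(withReplacement = true, fraction = 0.1, seed = 42L)  // each element drawn ~0.1 times in expectation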
-
randomSplit
Randomly splits this RDD with the provided weights.
- Parameters:
weights - weights for splits, will be normalized if they don't sum to 1
seed - random seed
- Returns:
split RDDs in an array
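For example, a common train/test split (illustrative; assumes an active SparkContext named sc):
val Array(train, test) = sc.parallelize(1 to 1000).randomSplit(Array(0.8, 0.2), seed = 13L)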
-
takeSample
Return a fixed-size sampled subset of this RDD in an array.
- Parameters:
withReplacement - whether sampling is done with replacement
num - size of the returned sample
seed - seed for the random number generator
- Returns:
sample of specified size in an array
- Note:
This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
-
union
Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).
- Parameters:
other - (undocumented)
- Returns:
(undocumented)
-
sortBy
public <K> RDD<T> sortBy(scala.Function1<T, K> f, boolean ascending, int numPartitions, scala.math.Ordering<K> ord, scala.reflect.ClassTag<K> ctag)
Return this RDD sorted by the given key function.
- Parameters:
f - (undocumented)
ascending - (undocumented)
numPartitions - (undocumented)
ord - (undocumented)
ctag - (undocumented)
- Returns:
(undocumented)
-
intersection
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.
- Parameters:
other - (undocumented)
- Returns:
(undocumented)
- Note:
- This method performs a shuffle internally.
-
intersection
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.
- Parameters:
partitioner - Partitioner to use for the resulting RDD
other - (undocumented)
ord - (undocumented)
- Returns:
(undocumented)
- Note:
- This method performs a shuffle internally.
-
intersection
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did. Performs a hash partition across the cluster.
- Parameters:
numPartitions - How many partitions to use in the resulting RDD
other - (undocumented)
- Returns:
(undocumented)
- Note:
- This method performs a shuffle internally.
-
glom
Return an RDD created by coalescing all elements within each partition into an array.
- Returns:
(undocumented)
-
cartesian
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
- Parameters:
other - (undocumented)
evidence$5 - (undocumented)
- Returns:
(undocumented)
-
groupBy
public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T, K> f, scala.reflect.ClassTag<K> kt)
Return an RDD of grouped items. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
- Parameters:
f - (undocumented)
kt - (undocumented)
- Returns:
(undocumented)
- Note:
This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
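An illustrative sketch of the note above (assumes an active SparkContext named sc):
val words = sc.parallelize(Seq("spark", "scala", "shark", "kafka"))
val byInitial = words.groupBy(w => w.head)                  // RDD[(Char, Iterable[String])]: ships whole groups
val counts = words.map(w => (w.head, 1)).reduceByKey(_ + _) // preferred for a simple per-key aggregate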
-
groupBy
public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T, K> f, int numPartitions, scala.reflect.ClassTag<K> kt)
Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
- Parameters:
f - (undocumented)
numPartitions - (undocumented)
kt - (undocumented)
- Returns:
(undocumented)
- Note:
This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
-
groupBy
public <K> RDD<scala.Tuple2<K,scala.collection.Iterable<T>>> groupBy(scala.Function1<T, K> f, Partitioner p, scala.reflect.ClassTag<K> kt, scala.math.Ordering<K> ord)
Return an RDD of grouped items. Each group consists of a key and a sequence of elements mapping to that key. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
- Parameters:
f - (undocumented)
p - (undocumented)
kt - (undocumented)
ord - (undocumented)
- Returns:
(undocumented)
- Note:
This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
-
pipe
Return an RDD created by piping elements to a forked external process.
- Parameters:
command - (undocumented)
- Returns:
(undocumented)
-
pipe
Return an RDD created by piping elements to a forked external process.
- Parameters:
command - (undocumented)
env - (undocumented)
- Returns:
(undocumented)
-
pipe
public RDD<String> pipe(scala.collection.immutable.Seq<String> command, scala.collection.Map<String, String> env, scala.Function1<scala.Function1<String, scala.runtime.BoxedUnit>, scala.runtime.BoxedUnit> printPipeContext, scala.Function2<T, scala.Function1<String, scala.runtime.BoxedUnit>, scala.runtime.BoxedUnit> printRDDElement, boolean separateWorkingDir, int bufferSize, String encoding)
Return an RDD created by piping elements to a forked external process. The resulting RDD is computed by executing the given process once per partition. All elements of each input partition are written to the process's stdin as lines of input separated by a newline. The resulting partition consists of the process's stdout output, with each line of stdout resulting in one element of the output partition. A process is invoked even for empty partitions.
The print behavior can be customized by providing two functions.
- Parameters:
command - command to run in forked process.
env - environment variables to set.
printPipeContext - Before piping elements, this function is called as an opportunity to pipe context data. The print line function (like out.println) will be passed as printPipeContext's parameter.
printRDDElement - Use this function to customize how to pipe elements. This function will be called with each RDD element as the 1st parameter, and the print line function (like out.println()) as the 2nd parameter. An example of piping the RDD data of groupBy() in a streaming way, instead of constructing a huge String to concatenate all the elements: def printRDDElement(record:(String, Seq[String]), f:String=>Unit) = for (e <- record._2) {f(e)}
separateWorkingDir - Use separate working directories for each task.
bufferSize - Buffer size for the stdin writer for the piped process.
encoding - Char encoding used for interacting (via stdin, stdout and stderr) with the piped process
- Returns:
the result RDD
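A minimal sketch of the simplest overload (illustrative; assumes an active SparkContext named sc and that the cat utility exists on the executors):
val echoed = sc.parallelize(Seq("hello", "world")).pipe("cat") // identity pipe: stdout mirrors stdin
// each element is written to the process's stdin as one line; each stdout line becomes one output element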
-
mapPartitions
public <U> RDD<U> mapPartitions(scala.Function1<scala.collection.Iterator<T>, scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$6)
Return a new RDD by applying a function to each partition of this RDD.
preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.
- Parameters:
f - (undocumented)
preservesPartitioning - (undocumented)
evidence$6 - (undocumented)
- Returns:
(undocumented)
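For example, computing one partial result per partition rather than per element (illustrative; assumes an active SparkContext named sc):
val sums = sc.parallelize(1 to 10, 2).mapPartitions(iter => Iterator(iter.sum))
// sums.collect() == Array(15, 40): one sum per partition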
-
mapPartitionsWithIndex
public <U> RDD<U> mapPartitionsWithIndex(scala.Function2<Object, scala.collection.Iterator<T>, scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$9)
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
preservesPartitioning indicates whether the input function preserves the partitioner, which should be false unless this is a pair RDD and the input function doesn't modify the keys.
- Parameters:
f - (undocumented)
preservesPartitioning - (undocumented)
evidence$9 - (undocumented)
- Returns:
(undocumented)
-
mapPartitionsWithEvaluator
public <U> RDD<U> mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T, U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$10)
Return a new RDD by applying an evaluator to each partition of this RDD. The given evaluator factory will be serialized and sent to executors, and each task will create an evaluator with the factory, and use the evaluator to transform the data of the input partition.
- Parameters:
evaluatorFactory - (undocumented)
evidence$10 - (undocumented)
- Returns:
(undocumented)
-
zipPartitionsWithEvaluator
public <U> RDD<U> zipPartitionsWithEvaluator(RDD<T> rdd2, PartitionEvaluatorFactory<T, U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$11)
Zip this RDD's partitions with another RDD and return a new RDD by applying an evaluator to the zipped partitions. Assumes that the two RDDs have the *same number of partitions*, but does *not* require them to have the same number of elements in each partition.
- Parameters:
rdd2 - (undocumented)
evaluatorFactory - (undocumented)
evidence$11 - (undocumented)
- Returns:
(undocumented)
-
zip
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the *same number of partitions* and the *same number of elements in each partition* (e.g. one was made through a map on the other).
- Parameters:
other - (undocumented)
evidence$13 - (undocumented)
- Returns:
(undocumented)
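An illustrative sketch (assumes an active SparkContext named sc):
val letters = sc.parallelize(Seq("a", "b", "c"), 2)
val numbers = sc.parallelize(Seq(1, 2, 3), 2) // same partition count and per-partition sizes required
letters.zip(numbers).collect()                // Array((a,1), (b,2), (c,3))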
-
zipPartitions
public <B,V> RDD<V> zipPartitions(RDD<B> rdd2, boolean preservesPartitioning, scala.Function2<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$14, scala.reflect.ClassTag<V> evidence$15)
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions. Assumes that all the RDDs have the *same number of partitions*, but does *not* require them to have the same number of elements in each partition.
- Parameters:
rdd2 - (undocumented)
preservesPartitioning - (undocumented)
f - (undocumented)
evidence$14 - (undocumented)
evidence$15 - (undocumented)
- Returns:
(undocumented)
-
zipPartitions
public <B,V> RDD<V> zipPartitions(RDD<B> rdd2, scala.Function2<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$16, scala.reflect.ClassTag<V> evidence$17) -
zipPartitions
public <B,C,V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.Function3<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$18, scala.reflect.ClassTag<C> evidence$19, scala.reflect.ClassTag<V> evidence$20) -
zipPartitions
public <B,C,V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, scala.Function3<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$21, scala.reflect.ClassTag<C> evidence$22, scala.reflect.ClassTag<V> evidence$23) -
zipPartitions
public <B,C,D,V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, boolean preservesPartitioning, scala.Function4<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<D>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$24, scala.reflect.ClassTag<C> evidence$25, scala.reflect.ClassTag<D> evidence$26, scala.reflect.ClassTag<V> evidence$27) -
zipPartitions
public <B,C,D,V> RDD<V> zipPartitions(RDD<B> rdd2, RDD<C> rdd3, RDD<D> rdd4, scala.Function4<scala.collection.Iterator<T>, scala.collection.Iterator<B>, scala.collection.Iterator<C>, scala.collection.Iterator<D>, scala.collection.Iterator<V>> f, scala.reflect.ClassTag<B> evidence$28, scala.reflect.ClassTag<C> evidence$29, scala.reflect.ClassTag<D> evidence$30, scala.reflect.ClassTag<V> evidence$31) -
foreach
Applies a function f to all elements of this RDD.
- Parameters:
f - (undocumented)
-
foreachPartition
public void foreachPartition(scala.Function1<scala.collection.Iterator<T>, scala.runtime.BoxedUnit> f)
Applies a function f to each partition of this RDD.
- Parameters:
f - (undocumented)
-
collect
Return an array that contains all of the elements in this RDD.
- Returns:
(undocumented)
- Note:
This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
-
toLocalIterator
Return an iterator that contains all of the elements in this RDD.
The iterator will consume as much memory as the largest partition in this RDD.
- Returns:
(undocumented)
- Note:
This results in multiple Spark jobs, and if the input RDD is the result of a wide transformation (e.g. join with different partitioners), the input RDD should be cached first to avoid recomputing it.
-
collect
Return an RDD that contains all matching values by applying f.
- Parameters:
f - (undocumented)
evidence$32 - (undocumented)
- Returns:
(undocumented)
-
subtract
Return an RDD with the elements from this that are not in other.
Uses this RDD's partitioner/partition size, because even if other is huge, the resulting RDD will be <= us.
- Parameters:
other - (undocumented)
- Returns:
(undocumented)
-
subtract
Return an RDD with the elements from this that are not in other.
- Parameters:
other - (undocumented)
numPartitions - (undocumented)
- Returns:
(undocumented)
-
subtract
Return an RDD with the elements from this that are not in other.
- Parameters:
other - (undocumented)
p - (undocumented)
ord - (undocumented)
- Returns:
(undocumented)
-
reduce
Reduces the elements of this RDD using the specified commutative and associative binary operator.
- Parameters:
f - (undocumented)
- Returns:
(undocumented)
-
treeReduce
Reduces the elements of this RDD in a multi-level tree pattern.
- Parameters:
depth - suggested depth of the tree (default: 2)
f - (undocumented)
- Returns:
(undocumented)
fold
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value". The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.
This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.
- Parameters:
zeroValue - the initial value for the accumulated result of each partition for the op operator, and also the initial value for the combine results from different partitions for the op operator - this will typically be the neutral element (e.g. Nil for list concatenation or 0 for summation)
op - an operator used to both accumulate results within a partition and combine results from different partitions
- Returns:
(undocumented)
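For example (illustrative; assumes an active SparkContext named sc; the zero value must be neutral because it is applied once per partition and once more when combining):
sc.parallelize(Seq(1, 2, 3, 4), 2).fold(0)(_ + _) // 10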
-
aggregate
public <U> U aggregate(U zeroValue, scala.Function2<U, T, U> seqOp, scala.Function2<U, U, U> combOp, scala.reflect.ClassTag<U> evidence$33)
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into a U and one operation for merging two U's, as in scala.IterableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.
- Parameters:
zeroValue - the initial value for the accumulated result of each partition for the seqOp operator, and also the initial value for the combine results from different partitions for the combOp operator - this will typically be the neutral element (e.g. Nil for list concatenation or 0 for summation)
seqOp - an operator used to accumulate results within a partition
combOp - an associative operator used to combine results from different partitions
evidence$33 - (undocumented)
- Returns:
(undocumented)
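For example, computing a sum and a count in one pass, where U = (Int, Int) differs from T = Int (illustrative; assumes an active SparkContext named sc):
val (sum, n) = sc.parallelize(1 to 100).aggregate((0, 0))(
  (acc, x) => (acc._1 + x, acc._2 + 1),  // seqOp: fold one element into a partition's accumulator
  (a, b) => (a._1 + b._1, a._2 + b._2))  // combOp: merge two partition accumulators
val mean = sum.toDouble / n              // 50.5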
-
treeAggregate
public <U> U treeAggregate(U zeroValue, scala.Function2<U, T, U> seqOp, scala.Function2<U, U, U> combOp, int depth, scala.reflect.ClassTag<U> evidence$34)
Aggregates the elements of this RDD in a multi-level tree pattern. This method is semantically identical to aggregate(U, scala.Function2<U, T, U>, scala.Function2<U, U, U>, scala.reflect.ClassTag<U>).
- Parameters:
depth - suggested depth of the tree (default: 2)
zeroValue - (undocumented)
seqOp - (undocumented)
combOp - (undocumented)
evidence$34 - (undocumented)
- Returns:
(undocumented)
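The contract is the same as aggregate; only the combine topology changes. An illustrative sketch (assumes an active SparkContext named sc):
val total = sc.parallelize(1 to 1000, 100).treeAggregate(0)(_ + _, _ + _, depth = 2)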
-
treeAggregate
public <U> U treeAggregate(U zeroValue, scala.Function2<U, T, U> seqOp, scala.Function2<U, U, U> combOp, int depth, boolean finalAggregateOnExecutor, scala.reflect.ClassTag<U> evidence$35)
treeAggregate(U, scala.Function2<U, T, U>, scala.Function2<U, U, U>, int, scala.reflect.ClassTag<U>) with a parameter to do the final aggregation on the executor.
- Parameters:
finalAggregateOnExecutor - do final aggregation on executor
zeroValue - (undocumented)
seqOp - (undocumented)
combOp - (undocumented)
depth - (undocumented)
evidence$35 - (undocumented)
- Returns:
(undocumented)
-
count
public long count()
Return the number of elements in the RDD.
- Returns:
(undocumented)
-
countApprox
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
The confidence is the probability that the error bounds of the result will contain the true value. That is, if countApprox were called repeatedly with confidence 0.9, we would expect 90% of the results to contain the true count. The confidence must be in the range [0,1] or an exception will be thrown.
- Parameters:
timeout - maximum time to wait for the job, in milliseconds
confidence - the desired statistical confidence in the result
- Returns:
a potentially incomplete result, with error bounds
-
countByValue
Return the count of each unique value in this RDD as a local map of (value, count) pairs.
- Parameters:
ord - (undocumented)
- Returns:
(undocumented)
- Note:
This method should only be used if the resulting map is expected to be small, as the whole thing is loaded into the driver's memory. To handle very large results, consider using rdd.map(x => (x, 1L)).reduceByKey(_ + _), which returns an RDD[(T, Long)] instead of a map.
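For example (illustrative; assumes an active SparkContext named sc):
sc.parallelize(Seq("a", "b", "a")).countByValue() // Map(a -> 2, b -> 1), materialized on the driver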
-
countByValueApprox
public PartialResult<scala.collection.Map<T,BoundedDouble>> countByValueApprox(long timeout, double confidence, scala.math.Ordering<T> ord)
Approximate version of countByValue().
- Parameters:
timeout - maximum time to wait for the job, in milliseconds
confidence - the desired statistical confidence in the result
ord - (undocumented)
- Returns:
a potentially incomplete result, with error bounds
-
countApproxDistinct
public long countApproxDistinct(int p, int sp)
Return approximate number of distinct elements in the RDD.
The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available here.
The relative accuracy is approximately 1.054 / sqrt(2^p). Setting a nonzero sp (where sp is greater than p) would trigger sparse representation of registers, which may reduce the memory consumption and increase accuracy when the cardinality is small.
- Parameters:
p - The precision value for the normal set. p must be a value between 4 and sp if sp is not zero (32 max).
sp - The precision value for the sparse set, between 0 and 32. If sp equals 0, the sparse representation is skipped.
- Returns:
(undocumented)
-
countApproxDistinct
public long countApproxDistinct(double relativeSD)
Return approximate number of distinct elements in the RDD.
The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available here.
- Parameters:
relativeSD - Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
- Returns:
(undocumented)
-
zipWithIndex
Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition receives the largest index.
This is similar to Scala's zipWithIndex but it uses Long instead of Int as the index type. This method needs to trigger a Spark job when this RDD contains more than one partition.
- Returns:
(undocumented)
- Note:
Some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The index assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.
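For example (illustrative; assumes an active SparkContext named sc):
sc.parallelize(Seq("a", "b", "c", "d"), 2).zipWithIndex().collect()
// Array((a,0), (b,1), (c,2), (d,3)): indices follow partition order, then order within each partition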
-
zipWithUniqueId
Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method won't trigger a Spark job, which is different from zipWithIndex().
- Returns:
(undocumented)
- Note:
Some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The unique ID assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.
-
take
Take the first num elements of the RDD. It works by first scanning one partition, and using the results from that partition to estimate the number of additional partitions needed to satisfy the limit.
- Parameters:
num - (undocumented)
- Returns:
(undocumented)
- Note:
This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory. Due to complications in the internal implementation, this method will raise an exception if called on an RDD of Nothing or Null.
-
first
Return the first element in this RDD.
- Returns:
(undocumented)
-
top
Returns the top k (largest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering. This does the opposite of takeOrdered(int, scala.math.Ordering<T>). For example:
sc.parallelize(Seq(10, 4, 2, 12, 3)).top(1) // returns Array(12)
sc.parallelize(Seq(2, 3, 4, 5, 6)).top(2) // returns Array(6, 5)
- Parameters:
num - k, the number of top elements to return
ord - the implicit ordering for T
- Returns:
an array of top elements
- Note:
This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
-
takeOrdered
Returns the first k (smallest) elements from this RDD as defined by the specified implicit Ordering[T] and maintains the ordering. This does the opposite of top(int, scala.math.Ordering<T>). For example:
sc.parallelize(Seq(10, 4, 2, 12, 3)).takeOrdered(1) // returns Array(2)
sc.parallelize(Seq(2, 3, 4, 5, 6)).takeOrdered(2) // returns Array(2, 3)
- Parameters:
num - k, the number of elements to return
ord - the implicit ordering for T
- Returns:
an array of top elements
- Note:
This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
-
max
Returns the max of this RDD as defined by the implicit Ordering[T].
- Parameters:
ord - (undocumented)
- Returns:
the maximum element of the RDD
-
min
Returns the min of this RDD as defined by the implicit Ordering[T].
- Parameters:
ord - (undocumented)
- Returns:
the minimum element of the RDD
-
isEmpty
public boolean isEmpty()
- Returns:
true if and only if the RDD contains no elements at all. Note that an RDD may be empty even when it has at least 1 partition.
- Note:
Due to complications in the internal implementation, this method will raise an exception if called on an RDD of Nothing or Null. This may come up in practice because, for example, the type of parallelize(Seq()) is RDD[Nothing]. (parallelize(Seq()) should be avoided anyway in favor of parallelize(Seq[T]()).)
-
saveAsTextFile
Save this RDD as a text file, using string representations of elements.
- Parameters:
path - (undocumented)
-
saveAsTextFile
public void saveAsTextFile(String path, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
Save this RDD as a compressed text file, using string representations of elements.
- Parameters:
path - (undocumented)
codec - (undocumented)
-
saveAsObjectFile
Save this RDD as a SequenceFile of serialized objects.
- Parameters:
path - (undocumented)
-
keyBy
Creates tuples of the elements in this RDD by applying f.
- Parameters:
f - (undocumented)
- Returns:
(undocumented)
-
checkpoint
public void checkpoint()
Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext#setCheckpointDir and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
The data is only checkpointed when doCheckpoint() is called, and this only happens at the end of the first action execution on this RDD. The final data that is checkpointed after the first action may be different from the data that was used during the action, due to non-determinism of the underlying operation and retries. If the purpose of the checkpoint is to achieve saving a deterministic snapshot of the data, an eager action may need to be called first on the RDD to trigger the checkpoint.
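An illustrative sketch (assumes an active SparkContext named sc; the checkpoint path is an example only and must be writable by all executors):
sc.setCheckpointDir("/tmp/checkpoints")
val r = sc.parallelize(1 to 100).map(_ * 2)
r.cache()      // recommended above: avoids recomputing the lineage when the checkpoint runs
r.checkpoint()
r.count()      // the first action triggers both the computation and the checkpoint
-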
localCheckpoint
Mark this RDD for local checkpointing using Spark's existing caching layer.
This method is for users who wish to truncate RDD lineages while skipping the expensive step of replicating the materialized data in a reliable distributed file system. This is useful for RDDs with long lineages that need to be truncated periodically (e.g. GraphX).
Local checkpointing sacrifices fault-tolerance for performance. In particular, checkpointed data is written to ephemeral local storage in the executors instead of to a reliable, fault-tolerant storage. The effect is that if an executor fails during the computation, the checkpointed data may no longer be accessible, causing an irrecoverable job failure.
This is NOT safe to use with dynamic allocation, which removes executors along with their cached blocks. If you must use both features, you are advised to set spark.dynamicAllocation.cachedExecutorIdleTimeout to a high value.
The checkpoint directory set through SparkContext#setCheckpointDir is not used.
The data is only checkpointed when doCheckpoint() is called, and this only happens at the end of the first action execution on this RDD. The final data that is checkpointed after the first action may be different from the data that was used during the action, due to non-determinism of the underlying operation and retries. If the purpose of the checkpoint is to achieve saving a deterministic snapshot of the data, an eager action may need to be called first on the RDD to trigger the checkpoint.
- Returns:
(undocumented)
-
isCheckpointed
public boolean isCheckpointed()
Return whether this RDD is checkpointed and materialized, either reliably or locally.
- Returns:
(undocumented)
-
getCheckpointFile
Gets the name of the directory to which this RDD was checkpointed. This is not defined if the RDD is checkpointed locally.
- Returns:
(undocumented)
-
cleanShuffleDependencies
public void cleanShuffleDependencies(boolean blocking)
Removes an RDD's shuffles and its non-persisted ancestors. When running without a shuffle service, cleaning up shuffle files enables downscaling. If you use the RDD after this call, you should checkpoint and materialize it first. If you are uncertain of what you are doing, please do not use this feature.
Additional techniques for mitigating orphaned shuffle files:
- Tuning the driver GC to be more aggressive, so the regular context cleaner is triggered
- Setting an appropriate TTL for shuffle files to be auto cleaned
- Parameters:
blocking - (undocumented)
-
barrier
:: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together. In case of a task failure, instead of only restarting the failed task, Spark will abort the entire stage and re-launch all tasks for this stage. The barrier execution mode feature is experimental and it only handles limited scenarios. Please read the linked SPIP and design docs to understand the limitations and future plans.
- Returns:
an RDDBarrier instance that provides actions within a barrier stage
-
withResources
Specify a ResourceProfile to use when calculating this RDD. This is only supported on certain cluster managers and currently requires dynamic allocation to be enabled. It will result in new executors with the resources specified being acquired to calculate the RDD.
- Parameters:
rp - (undocumented)
- Returns:
(undocumented)
-
getResourceProfile
Get the ResourceProfile specified with this RDD or null if it wasn't specified.
- Returns:
the user specified ResourceProfile or null (for Java compatibility) if none was specified
-
context
The SparkContext that this RDD was created on. -
toDebugString
A description of this RDD and its recursive dependencies for debugging. -
toString
-
toJavaRDD
-