Package org.apache.spark.rdd
Class RDDBarrier<T>
java.lang.Object
  org.apache.spark.rdd.RDDBarrier<T>
:: Experimental ::

Wraps an RDD in a barrier stage, which forces Spark to launch tasks of this stage together. RDDBarrier instances are created by RDD.barrier().
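
As a rough sketch of how the class is used (the app name and data here are illustrative, not from this page): calling RDD.barrier() produces an RDDBarrier, and the transformation applied to it runs as a barrier stage, so Spark schedules all of its tasks at once or not at all.

```scala
import org.apache.spark.{BarrierTaskContext, SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("barrier-demo").setMaster("local[4]"))
val rdd = sc.parallelize(1 to 8, numSlices = 4)

// RDD.barrier() wraps the RDD in an RDDBarrier; the mapPartitions below
// runs as a barrier stage, so all four tasks are launched together.
val result = rdd.barrier().mapPartitions { iter =>
  val ctx = BarrierTaskContext.get()
  ctx.barrier() // all tasks block here until every task has reached this point
  iter.map(_ * 2)
}
```

Because the scheduler must find slots for every task simultaneously, barrier stages require as many free slots as there are partitions.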
Method Summary

- <S> RDD<S> mapPartitions(scala.Function1<scala.collection.Iterator<T>, scala.collection.Iterator<S>> f, boolean preservesPartitioning, scala.reflect.ClassTag<S> evidence$2)
  :: Experimental :: Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage.
- <U> RDD<U> mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T, U> evaluatorFactory, scala.reflect.ClassTag<U> evidence$4)
  Returns a new RDD by applying an evaluator to each partition of the wrapped RDD.
- <S> RDD<S> mapPartitionsWithIndex(scala.Function2<Object, scala.collection.Iterator<T>, scala.collection.Iterator<S>> f, boolean preservesPartitioning, scala.reflect.ClassTag<S> evidence$3)
  :: Experimental :: Returns a new RDD by applying a function to each partition of the wrapped RDD, while tracking the index of the original partition.

Method Details
mapPartitions

public <S> RDD<S> mapPartitions(scala.Function1<scala.collection.Iterator<T>, scala.collection.Iterator<S>> f,
                                boolean preservesPartitioning,
                                scala.reflect.ClassTag<S> evidence$2)

:: Experimental ::
Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage. The interface is the same as RDD.mapPartitions(scala.Function1<scala.collection.Iterator<T>, scala.collection.Iterator<U>>, boolean, scala.reflect.ClassTag<U>); please see the API doc there.

- Parameters:
- f - the function to apply to each partition's iterator
- preservesPartitioning - whether the input function preserves the partitioner; should be false unless this is a pair RDD and the function does not modify the keys
- evidence$2 - the implicit ClassTag for the result element type S
- Returns:
- a new RDD produced by applying f to each partition in a barrier stage
- See Also:
- RDD.mapPartitions
 
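A minimal sketch of a barrier mapPartitions, assuming a SparkContext sc and an RDD rdd (with one task slot per partition) already exist; the use of allGather to exchange partition ids is a hypothetical illustration, not part of this method's contract:

```scala
import org.apache.spark.BarrierTaskContext

// Each task publishes its partition id and blocks until every task in the
// barrier stage has contributed a message, then receives all messages.
val annotated = rdd.barrier().mapPartitions { iter =>
  val ctx = BarrierTaskContext.get()
  val allIds = ctx.allGather(ctx.partitionId().toString)
  // allIds has one entry per task in the stage
  iter.map(x => (x, allIds.length))
}
```

This kind of all-to-all coordination is the main reason to use a barrier stage instead of a plain RDD.mapPartitions.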
mapPartitionsWithEvaluator

public <U> RDD<U> mapPartitionsWithEvaluator(PartitionEvaluatorFactory<T, U> evaluatorFactory,
                                             scala.reflect.ClassTag<U> evidence$4)

Returns a new RDD by applying an evaluator to each partition of the wrapped RDD. The given evaluator factory is serialized and sent to the executors; each task creates an evaluator with the factory and uses that evaluator to transform the data of its input partition.

- Parameters:
- evaluatorFactory - the factory used by each task to create its PartitionEvaluator
- evidence$4 - the implicit ClassTag for the result element type U
- Returns:
- a new RDD produced by applying the evaluator to each partition
 
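A sketch of the factory pattern described above, assuming an existing RDD[Int] named rdd; the DoublingEvaluator class and its behavior are hypothetical, and the PartitionEvaluator/PartitionEvaluatorFactory shapes follow my reading of the Spark API (createEvaluator() on the factory, eval(partitionIndex, inputs*) on the evaluator):

```scala
import org.apache.spark.{PartitionEvaluator, PartitionEvaluatorFactory}

// Hypothetical evaluator that doubles every element of its partition.
class DoublingEvaluator extends PartitionEvaluator[Int, Int] {
  override def eval(partitionIndex: Int, inputs: Iterator[Int]*): Iterator[Int] =
    inputs.head.map(_ * 2)
}

class DoublingEvaluatorFactory extends PartitionEvaluatorFactory[Int, Int] {
  // Called once per task on the executor, after the factory is deserialized
  override def createEvaluator(): PartitionEvaluator[Int, Int] =
    new DoublingEvaluator
}

val doubled = rdd.barrier().mapPartitionsWithEvaluator(new DoublingEvaluatorFactory)
```

Note that the factory, not the evaluator, must be serializable, which lets the evaluator hold non-serializable per-task state.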
mapPartitionsWithIndex

public <S> RDD<S> mapPartitionsWithIndex(scala.Function2<Object, scala.collection.Iterator<T>, scala.collection.Iterator<S>> f,
                                         boolean preservesPartitioning,
                                         scala.reflect.ClassTag<S> evidence$3)

:: Experimental ::
Returns a new RDD by applying a function to each partition of the wrapped RDD, while tracking the index of the original partition; all tasks are launched together in a barrier stage. The interface is the same as RDD.mapPartitionsWithIndex(scala.Function2<java.lang.Object, scala.collection.Iterator<T>, scala.collection.Iterator<U>>, boolean, scala.reflect.ClassTag<U>); please see the API doc there.

- Parameters:
- f - the function to apply to each partition, given the partition's index and its iterator
- preservesPartitioning - whether the input function preserves the partitioner; should be false unless this is a pair RDD and the function does not modify the keys
- evidence$3 - the implicit ClassTag for the result element type S
- Returns:
- a new RDD produced by applying f to each indexed partition in a barrier stage
- See Also:
- RDD.mapPartitionsWithIndex
 
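A short sketch, assuming an existing RDD rdd; the tagging of elements with their partition index is an illustrative use, not prescribed by this page:

```scala
// Same as the barrier mapPartitions, but the function also receives the
// index of the original partition as its first argument.
val tagged = rdd.barrier().mapPartitionsWithIndex { (index, iter) =>
  iter.map(x => s"partition-$index: $x")
}
```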
 