pyspark.RDDBarrier.mapPartitions
RDDBarrier.mapPartitions(f: Callable[[Iterable[T]], Iterable[U]], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U]

Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage. The interface is the same as RDD.mapPartitions(). Please see the API doc there.

New in version 2.4.0.
Parameters
- f : function
  a function to run on each partition of the RDD
- preservesPartitioning : bool, optional, default False
  indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function does not modify the keys
Returns
- RDD
  a new RDD by applying a function to each partition
See also
RDD.mapPartitions()
Notes
This API is experimental.
Examples
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
...
>>> barrier = rdd.barrier()
>>> barrier
<pyspark.rdd.RDDBarrier ...>
>>> barrier.mapPartitions(f).collect()
[3, 7]
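The per-partition semantics in the example above can be simulated without a Spark cluster. Below is a minimal plain-Python sketch (not the actual PySpark implementation, and ignoring the barrier-stage scheduling) that mimics applying a generator function to each partition and collecting the results; it assumes, as in the doctest, that `sc.parallelize([1, 2, 3, 4], 2)` splits the data into two partitions `[1, 2]` and `[3, 4]`:

```python
from typing import Callable, Iterable, List, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def simulate_map_partitions(
    partitions: List[List[T]],
    f: Callable[[Iterable[T]], Iterable[U]],
) -> List[U]:
    """Hypothetical helper: apply f to each partition's iterator and
    flatten the results, mirroring what mapPartitions(f).collect()
    returns on the driver."""
    out: List[U] = []
    for part in partitions:
        # f receives an iterator over one partition's elements,
        # just like the function passed to mapPartitions.
        out.extend(f(iter(part)))
    return out

def f(iterator):
    # Same generator as in the doctest: one sum per partition.
    yield sum(iterator)

# Two partitions, as produced by sc.parallelize([1, 2, 3, 4], 2).
print(simulate_map_partitions([[1, 2], [3, 4]], f))  # [3, 7]
```

Note that `f` is called once per partition with an iterator, not once per element; that is why the collected result has one value per partition.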