pyspark.RDD.mapPartitionsWithIndex

RDD.mapPartitionsWithIndex(f: Callable[[int, Iterable[T]], Iterable[U]], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U]

Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

New in version 0.7.0.

Parameters
f : function

a function to run on each partition of the RDD

preservesPartitioning : bool, optional, default False

indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function doesn't modify the keys

Returns
RDD

a new RDD by applying a function to each partition

Examples

>>> rdd = sc.parallelize([1, 2, 3, 4], 4)
>>> def f(splitIndex, iterator): yield splitIndex
...
>>> rdd.mapPartitionsWithIndex(f).sum()
6
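Because each partition index (0 through 3) is yielded once, the sum is 0 + 1 + 2 + 3 = 6. The per-partition contract — `f` receives the partition's index and an iterator over its elements, and returns an iterator of results — can be illustrated without a running SparkContext. The sketch below is a plain-Python simulation of that contract (the helper names `simulate_map_partitions_with_index` and `tag_with_partition` are illustrative, not part of PySpark):

```python
from typing import Callable, Iterable, Iterator, List, Tuple, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def simulate_map_partitions_with_index(
    partitions: List[List[T]],
    f: Callable[[int, Iterable[T]], Iterable[U]],
) -> List[U]:
    # Apply f to each (index, partition-iterator) pair and flatten the
    # results, mirroring how mapPartitionsWithIndex produces one output
    # iterator per input partition.
    out: List[U] = []
    for index, partition in enumerate(partitions):
        out.extend(f(index, iter(partition)))
    return out

def tag_with_partition(index: int, it: Iterator[int]) -> Iterator[Tuple[int, int]]:
    # Tag every element with the index of the partition it came from.
    for x in it:
        yield (index, x)

# Four single-element partitions, as in the RDD example above.
result = simulate_map_partitions_with_index([[1], [2], [3], [4]], tag_with_partition)
# result == [(0, 1), (1, 2), (2, 3), (3, 4)]
```

Tagging elements with their partition index like this is a common use of `mapPartitionsWithIndex`, e.g. for debugging skewed data distributions across partitions.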