pyspark.RDDBarrier.mapPartitionsWithIndex

RDDBarrier.mapPartitionsWithIndex(f: Callable[[int, Iterable[T]], Iterable[U]], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U][source]

Returns a new RDD by applying a function to each partition of the wrapped RDD, while tracking the index of the original partition. All tasks are launched together in a barrier stage. The interface is the same as RDD.mapPartitionsWithIndex(); see the API documentation there for details.

New in version 3.0.0.

Notes

This API is experimental.
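
Below is a minimal usage sketch. It assumes a local master with at least as many cores as partitions, since every task in a barrier stage must be scheduled at once; the partition count and input values are illustrative only.

from pyspark import SparkContext, BarrierTaskContext

# Illustrative setup: local[4] provides enough slots for the 4 barrier tasks.
sc = SparkContext("local[4]", "barrier-example")

def f(index, iterator):
    # All tasks in the barrier stage reach this point before any proceeds.
    BarrierTaskContext.get().barrier()
    yield (index, sum(iterator))

rdd = sc.parallelize(range(8), 4)
print(rdd.barrier().mapPartitionsWithIndex(f).collect())
# Expected output: [(0, 1), (1, 5), (2, 9), (3, 13)]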