pyspark.RDD.mapPartitions

RDD.mapPartitions(f, preservesPartitioning=False)

Return a new RDD by applying a function to each partition of this RDD. The function f receives an iterator over the elements of one partition and must return an iterable (typically a generator) of output elements. preservesPartitioning should be left False unless this is a pair RDD and f does not modify the keys, in which case setting it to True lets Spark keep the existing partitioner.

Examples

>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator):
...     yield sum(iterator)  # one result per partition
>>> rdd.mapPartitions(f).collect()
[3, 7]
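
Because f runs once per partition rather than once per element, mapPartitions is a natural fit when a task needs per-partition setup (for example, opening a connection or building a lookup table) whose cost should be paid once per partition instead of once per element. Below is a minimal sketch in the same doctest style, reusing rdd from above; the helper name add_offset and its setup value are illustrative, not part of the API:

>>> def add_offset(iterator):
...     offset = 10  # stand-in for per-partition setup done once
...     for x in iterator:
...         yield x + offset
>>> rdd.mapPartitions(add_offset).collect()
[11, 12, 13, 14]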