pyspark.RDD.map

RDD.map(f: Callable[[T], U], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U]

Return a new RDD by applying a function to each element of this RDD. Set preservesPartitioning to True only when this is a pair RDD and f does not modify the keys, so the resulting RDD keeps the existing partitioner.

Examples

>>> rdd = sc.parallelize(["b", "a", "c"])
>>> sorted(rdd.map(lambda x: (x, 1)).collect())
[('a', 1), ('b', 1), ('c', 1)]
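
The sketch below is an additional, unofficial illustration, assuming the same live SparkContext sc as in the doctest above. On a pair RDD whose keys are left unchanged, passing preservesPartitioning=True lets the mapped RDD keep the partitioner established by partitionBy:

>>> pairs = sc.parallelize([("a", 1), ("b", 2), ("c", 3)]).partitionBy(2)
>>> # keys are unchanged, so it is safe to declare the partitioning preserved
>>> mapped = pairs.map(lambda kv: (kv[0], kv[1] * 10), preservesPartitioning=True)
>>> sorted(mapped.collect())
[('a', 10), ('b', 20), ('c', 30)]
>>> mapped.partitioner == pairs.partitioner
True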