pyspark.RDD.map

RDD.map(f, preservesPartitioning=False)[source]

Return a new RDD by applying a function to each element of this RDD.

Parameters

f : function
    a function to apply to each element of the RDD

preservesPartitioning : bool, optional, default False
    indicates whether the input function preserves the partitioner, which
    should be False unless this is a pair RDD and the input function does
    not modify the keys

Returns

RDD
    a new RDD containing the result of applying f to each element

Examples

>>> rdd = sc.parallelize(["b", "a", "c"])
>>> sorted(rdd.map(lambda x: (x, 1)).collect())
[('a', 1), ('b', 1), ('c', 1)]