pyspark.RDD.map

RDD.map(f: Callable[[T], U], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U]

Return a new RDD by applying a function to each element of this RDD.

New in version 0.7.0.

Parameters
f : function

a function to run on each element of the RDD

preservesPartitioning : bool, optional, default False

indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function does not modify the keys (see the second example below)

Returns
RDD

a new RDD obtained by applying f to each element of this RDD

Examples

>>> rdd = sc.parallelize(["b", "a", "c"])
>>> sorted(rdd.map(lambda x: (x, 1)).collect())
[('a', 1), ('b', 1), ('c', 1)]
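
The preservesPartitioning flag only matters for key-value RDDs that already have a partitioner. A minimal sketch of the difference, assuming the same active SparkContext sc as above (the names pairs, keep, and drop are illustrative):

>>> pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)]).partitionBy(2)
>>> pairs.partitioner is not None
True
>>> keep = pairs.map(lambda kv: (kv[0], kv[1] + 1), preservesPartitioning=True)
>>> keep.partitioner == pairs.partitioner
True
>>> drop = pairs.map(lambda kv: (kv[0], kv[1] + 1))
>>> drop.partitioner is None
True

Because the lambda above does not change the keys, passing preservesPartitioning=True lets downstream operations that rely on the partitioner (for example, joins on the same keys) avoid an unnecessary shuffle.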