pyspark.pandas.DataFrame.spark.coalesce

spark.coalesce(num_partitions: int) → ps.DataFrame
Returns a new DataFrame that has exactly num_partitions partitions.

Note

This operation results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead, each of the 100 new partitions will claim 10 of the current partitions. If a larger number of partitions is requested, the DataFrame will stay at the current number of partitions. However, if you are doing a drastic coalesce, e.g. to num_partitions = 1, your computation may take place on fewer nodes than you would like (e.g. one node in the case of num_partitions = 1). To avoid this, you can call repartition() instead. This will add a shuffle step, but it means the current upstream partitions will be executed in parallel (per whatever the current partitioning is). See the short sketch after the examples below.

Parameters
num_partitions : int
    The target number of partitions.
 
Returns

DataFrame
 
Examples

>>> psdf = ps.DataFrame({"age": [5, 5, 2, 2],
...                      "name": ["Bob", "Bob", "Alice", "Alice"]}).set_index("age")
>>> psdf.sort_index()
      name
age
2    Alice
2    Alice
5      Bob
5      Bob
>>> new_psdf = psdf.spark.coalesce(1)
>>> new_psdf.to_spark().rdd.getNumPartitions()
1
>>> new_psdf.sort_index()
      name
age
2    Alice
2    Alice
5      Bob
5      Bob
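
A short follow-on sketch, reusing new_psdf from the example above: it illustrates that coalesce cannot increase the partition count, while the shuffle-based spark.repartition() mentioned in the note can.

>>> # Requesting more partitions than currently exist is a no-op for
>>> # coalesce; the DataFrame keeps its current single partition.
>>> new_psdf.spark.coalesce(100).to_spark().rdd.getNumPartitions()
1
>>> # repartition() can grow the partition count, at the cost of a shuffle.
>>> new_psdf.spark.repartition(4).to_spark().rdd.getNumPartitions()
4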