pyspark.sql.DataFrame.repartitionByRange

DataFrame.repartitionByRange(numPartitions: Union[int, ColumnOrName], *cols: ColumnOrName) → DataFrame

Returns a new DataFrame partitioned by the given partitioning expressions. The resulting DataFrame is range partitioned.

At least one partition-by expression must be specified. When no explicit sort order is specified, “ascending nulls first” is assumed.
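Since the default is ascending with nulls first, a different layout can be requested by passing a sort expression rather than a bare column. A minimal sketch, using the df defined in the Examples below:

>>> df.repartitionByRange(2, df.age.desc()).rdd.getNumPartitions()
2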

New in version 2.4.0.

Parameters
numPartitions : int or Column

can be an int to specify the target number of partitions, or a Column. If it is a Column, it will be used as the first partitioning column. If not specified, the default number of partitions is used. Both calling conventions are shown in the sketch after this list.

cols : str or Column

partitioning columns.
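A minimal sketch of the two calling conventions, assuming the df defined in the Examples below (when no count is given, the number of partitions comes from spark.sql.shuffle.partitions):

>>> out = df.repartitionByRange(4, "age")         # explicit partition count, then columns
>>> out = df.repartitionByRange(df.age, df.name)  # no count; age becomes the first partitioning column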

Notes

For performance reasons this method uses sampling to estimate the range boundaries. Hence, the output may not be consistent, since sampling can return different values. The sample size per partition can be controlled by the config spark.sql.execution.rangeExchange.sampleSizePerPartition.
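The setting can be changed at runtime through the session configuration. A sketch, assuming a live SparkSession named spark (the value 500 is arbitrary):

>>> spark.conf.set("spark.sql.execution.rangeExchange.sampleSizePerPartition", 500)
>>> out = df.repartitionByRange(2, "age")  # sampling now draws up to 500 rows per input partition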

Examples

>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.repartitionByRange(2, "age").rdd.getNumPartitions()
2
>>> df.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+
>>> df.repartitionByRange(1, "age").rdd.getNumPartitions()
1
>>> data = df.repartitionByRange("age")
>>> data.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+
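To see which partition each row was assigned to, spark_partition_id can be appended as a column. A sketch (the exact row-to-partition assignment can vary from run to run, because the range bounds come from sampling):

>>> from pyspark.sql.functions import spark_partition_id
>>> df.repartitionByRange(2, "age").withColumn("pid", spark_partition_id()).show()
+---+-----+---+
|age| name|pid|
+---+-----+---+
|  2|Alice|  0|
|  5|  Bob|  1|
+---+-----+---+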