pyspark.sql.functions.bucket

pyspark.sql.functions.bucket(numBuckets: Union[pyspark.sql.column.Column, int], col: ColumnOrName) → pyspark.sql.column.Column

Partition transform function: A transform for any type that partitions by a hash of the input column.

New in version 3.1.0.

Notes

This function can be used only in combination with the partitionedBy() method of DataFrameWriterV2.

Examples

>>> from pyspark.sql.functions import bucket
>>> df.writeTo("catalog.db.table").partitionedBy(
...     bucket(42, "ts")
... ).createOrReplace()