
The following options for repartition are possible (a short sketch follows the list):

  1. Return a new SparkDataFrame that has exactly numPartitions.

  2. Return a new SparkDataFrame hash partitioned by the given columns into numPartitions.

  3. Return a new SparkDataFrame hash partitioned by the given column(s), using spark.sql.shuffle.partitions as the number of partitions.
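The sketch below is not part of the official examples; it maps the three patterns onto calls, assuming an active SparkR session, an illustrative data frame built with createDataFrame(mtcars), and getNumPartitions() (available for SparkDataFrames since SparkR 2.1.1) to inspect the result.

sparkR.session()
df <- createDataFrame(mtcars)

# Option 1: exact number of partitions
getNumPartitions(repartition(df, numPartitions = 4L))   # 4

# Option 2: hash partition by cyl into 4 partitions
getNumPartitions(repartition(df, 4L, col = df$cyl))     # 4

# Option 3: hash partition by cyl; spark.sql.shuffle.partitions decides the count
getNumPartitions(repartition(df, col = df$cyl))         # typically 200 unless configured otherwise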

Usage

repartition(x, ...)

# S4 method for SparkDataFrame
repartition(x, numPartitions = NULL, col = NULL, ...)

Arguments

x

a SparkDataFrame.

...

additional column(s) to be used in the partitioning.

numPartitions

the number of partitions to use.

col

the column by which the partitioning will be performed.

Note

repartition since 1.4.0

Examples

if (FALSE) {
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)

# Option 1: exact number of partitions
newDF <- repartition(df, 2L)
newDF <- repartition(df, numPartitions = 2L)

# Option 3: hash partition by col1 and col2, using spark.sql.shuffle.partitions
newDF <- repartition(df, col = df$"col1", df$"col2")

# Option 2: hash partition by col1 and col2 into 3 partitions
newDF <- repartition(df, 3L, col = df$"col1", df$"col2")
}