
Save the contents of a SparkDataFrame as a Parquet file, preserving the schema. Files written out with this method can be read back in as a SparkDataFrame using read.parquet().

Usage

write.parquet(x, path, ...)

# S4 method for SparkDataFrame,character
write.parquet(x, path, mode = "error", ...)

Arguments

x

A SparkDataFrame

path

The directory where the file is saved

...

additional argument(s) passed to the method. You can find the Parquet-specific options for writing Parquet files in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#data-source-option) in the version you use.
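
For instance, the Parquet data source documents a compression option (codecs such as 'snappy' or 'gzip'). A minimal sketch, assuming such options pass through ... as named arguments, as they do for write.df:

# Sketch: choose the Parquet compression codec via a data source option
write.parquet(df, "/tmp/parquet-gzip/", compression = "gzip")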

mode

the save mode: one of 'append', 'overwrite', 'error', 'errorifexists', or 'ignore' ('error' by default)
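
As a sketch of the difference between modes: with the default 'error' mode a second write to the same path fails, while 'overwrite' replaces the existing output and 'ignore' silently skips the write (paths here are illustrative):

write.parquet(df, "/tmp/parquet-out/")                      # succeeds if the path does not exist
write.parquet(df, "/tmp/parquet-out/", mode = "overwrite")  # replaces any existing output
write.parquet(df, "/tmp/parquet-out/", mode = "ignore")     # no-op, since the path now exists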

Note

write.parquet since 1.6.0

Examples

if (FALSE) {
sparkR.session()
# Read a JSON file into a SparkDataFrame
path <- "path/to/file.json"
df <- read.json(path)
# Write it out as Parquet, preserving the schema
write.parquet(df, "/tmp/sparkr-tmp1/")
# Read the Parquet output back in as a SparkDataFrame
df2 <- read.parquet("/tmp/sparkr-tmp1/")
}