write.df {SparkR}	R Documentation

Save the contents of SparkDataFrame to a data source.


Description

The data source is specified by the source and a set of options (...). If source is not specified, the default data source configured by spark.sql.sources.default will be used.
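For example, omitting source writes in the format named by spark.sql.sources.default (parquet in a stock configuration). A minimal sketch, assuming a running Spark session and a writable local path (the path name is illustrative):

sparkR.session()
df <- createDataFrame(faithful)
# No source supplied: the configured default data source (typically parquet) is used
write.df(df, path = "output/faithful_default")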


Usage

write.df(df, path = NULL, ...)

saveDF(df, path, source = NULL, mode = "error", ...)

## S4 method for signature 'SparkDataFrame'
write.df(df, path = NULL, source = NULL,
  mode = "error", ...)

## S4 method for signature 'SparkDataFrame,character'
saveDF(df, path, source = NULL,
  mode = "error", ...)



Arguments

df
a SparkDataFrame.

path
a name for the table.

...
additional argument(s) passed to the method.

source
a name for external data source.

mode
one of 'append', 'overwrite', 'error', 'errorifexists' or 'ignore'; the save mode (defaults to 'error')
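Data-source specific options can be supplied through ... and are forwarded to the underlying writer. A hedged sketch, assuming the built-in csv source and its header option (option names vary by data source):

sparkR.session()
df <- createDataFrame(mtcars)
# header = "true" is a csv writer option passed through the ... argument
write.df(df, path = "output/mtcars_csv", source = "csv", mode = "overwrite", header = "true")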


Details

Additionally, mode is used to specify the behavior of the save operation when data already exists in the data source. There are four modes:

'append': Contents of this SparkDataFrame are expected to be appended to existing data.

'overwrite': Existing data is expected to be overwritten by the contents of this SparkDataFrame.

'error' or 'errorifexists': An exception is expected to be thrown.

'ignore': The save operation is expected to not save the contents of the SparkDataFrame and to not change the existing data.
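In practice, mode decides what a second write to the same path does. A brief sketch of the four behaviors (the path is a placeholder):

sparkR.session()
df <- createDataFrame(data.frame(x = 1:3, y = c("a", "b", "c")))
write.df(df, path = "output/demo", source = "parquet", mode = "error")      # first write; fails if output/demo already exists
write.df(df, path = "output/demo", source = "parquet", mode = "append")     # adds the rows to the existing data
write.df(df, path = "output/demo", source = "parquet", mode = "overwrite")  # replaces the existing data
write.df(df, path = "output/demo", source = "parquet", mode = "ignore")     # no-op, existing data is left untouched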


Note

write.df since 1.4.0

saveDF since 1.4.0

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg, alias, arrange, as.data.frame, attach,SparkDataFrame-method, broadcast, cache, checkpoint, coalesce, collect, colnames, coltypes, createOrReplaceTempView, crossJoin, cube, dapplyCollect, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, gapplyCollect, gapply, getNumPartitions, group_by, head, hint, histogram, insertInto, intersect, isLocal, isStreaming, join, limit, localCheckpoint, merge, mutate, ncol, nrow, persist, printSchema, randomSplit, rbind, registerTempTable, rename, repartition, rollup, sample, saveAsTable, schema, selectExpr, select, showDF, show, storageLevel, str, subset, summary, take, toJSON, unionByName, union, unpersist, withColumn, withWatermark, with, write.jdbc, write.json, write.orc, write.parquet, write.stream, write.text


Examples

## Not run: 
##D sparkR.session()
##D path <- "path/to/file.json"
##D df <- read.json(path)
##D write.df(df, "myfile", "parquet", "overwrite")
##D saveDF(df, parquetPath2, "parquet", mode = "append", mergeSchema = TRUE)
## End(Not run)
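The example above leaves parquetPath2 undefined. A self-contained variant, assuming both paths point to writable locations (the names are illustrative only):

sparkR.session()
df <- read.json("path/to/file.json")                 # assumes a JSON file exists here
write.df(df, "myfile", "parquet", "overwrite")
parquetPath2 <- "path/to/other.parquet"              # hypothetical output location
saveDF(df, parquetPath2, "parquet", mode = "append", mergeSchema = TRUE)
sparkR.session.stop()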

[Package SparkR version 2.3.0 Index]