write.df {SparkR}        R Documentation

Save the contents of the DataFrame to a data source

Description

The data source is specified by the 'source' argument and a set of options (...). If 'source' is not specified, the default data source configured by spark.sql.sources.default will be used.
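For instance, a minimal sketch (assuming a DataFrame df already exists and spark.sql.sources.default is left at its usual 'parquet' value; the output path is hypothetical):

# No 'source' given: the default data source (parquet here) is used
write.df(df, "path/to/people")
# Equivalent call with the source named explicitly
write.df(df, "path/to/people", source = "parquet")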

Usage

## S4 method for signature 'DataFrame,character'
write.df(df, path, source = NULL,
  mode = "error", ...)

## S4 method for signature 'DataFrame,character'
saveDF(df, path, source = NULL,
  mode = "error", ...)

write.df(df, path, ...)

saveDF(df, path, source = NULL, mode = "error", ...)

Arguments

df

A SparkSQL DataFrame

path

The path to which the contents of the DataFrame are saved

source

The name of an external data source (for example 'parquet' or 'json')

mode

One of the save modes 'append', 'overwrite', 'error', or 'ignore' (the default is 'error')

Details

The mode argument specifies the behavior of the save operation when data already exists in the data source. There are four modes:
append: Contents of this DataFrame are appended to the existing data.
overwrite: Existing data is overwritten by the contents of this DataFrame.
error: An exception is thrown if data already exists.
ignore: The save operation is skipped; the contents of the DataFrame are not saved and the existing data is left unchanged.
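A brief sketch of the four modes (assuming df is an existing DataFrame and "path/to/people" is a hypothetical output path):

write.df(df, "path/to/people", "parquet", mode = "append")     # add rows to any existing data
write.df(df, "path/to/people", "parquet", mode = "overwrite")  # replace existing data
write.df(df, "path/to/people", "parquet", mode = "error")      # default: fail if data already exists
write.df(df, "path/to/people", "parquet", mode = "ignore")     # skip the save, leave existing data untouched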

See Also

Other DataFrame functions: $, $<-, select, select, select,DataFrame,Column-method, select,DataFrame,list-method, selectExpr; DataFrame-class, dataFrame, groupedData; [, [, [[, subset; agg, agg, count,GroupedData-method, summarize, summarize; arrange, arrange, arrange, orderBy, orderBy; as.data.frame, as.data.frame,DataFrame-method; attach, attach,DataFrame-method; cache; collect; colnames, colnames, colnames<-, colnames<-, columns, names, names<-; coltypes, coltypes, coltypes<-, coltypes<-; columns, dtypes, printSchema, schema, schema; count, nrow; describe, describe, describe, summary, summary, summary,PipelineModel-method; dim; distinct, unique; dropna, dropna, fillna, fillna, na.omit, na.omit; dtypes; except, except; explain, explain; filter, filter, where, where; first, first; groupBy, groupBy, group_by, group_by; head; insertInto, insertInto; intersect, intersect; isLocal, isLocal; join; limit, limit; merge, merge; mutate, mutate, transform, transform; ncol; persist; printSchema; rbind, rbind, unionAll, unionAll; registerTempTable, registerTempTable; rename, rename, withColumnRenamed, withColumnRenamed; repartition; sample, sample, sample_frac, sample_frac; saveAsParquetFile, saveAsParquetFile, write.parquet, write.parquet; saveAsTable, saveAsTable; selectExpr; showDF, showDF; show, show, show,GroupedData-method; str; take; unpersist; withColumn, withColumn; write.json, write.json; write.text, write.text

Examples

## Not run: 
##D sc <- sparkR.init()
##D sqlContext <- sparkRSQL.init(sc)
##D path <- "path/to/file.json"
##D df <- read.json(sqlContext, path)
##D write.df(df, "myfile", "parquet", "overwrite")
##D # Placeholder values for the variables used in the saveDF call below;
##D # extra options such as mergeSchema are passed through via '...'
##D parquetPath2 <- "path/to/file2.parquet"
##D saveMode <- "overwrite"
##D mergeSchema <- "true"
##D saveDF(df, parquetPath2, "parquet", mode = saveMode, mergeSchema = mergeSchema)
## End(Not run)

[Package SparkR version 1.6.1 Index]