unionAll {SparkR}    R Documentation

rbind

Description

Return a new SparkDataFrame containing the union of rows in this SparkDataFrame and another SparkDataFrame. This is equivalent to 'UNION ALL' in SQL. Note that this does not remove duplicate rows across the two SparkDataFrames.

rbind returns a new SparkDataFrame containing the rows of all the SparkDataFrames passed as arguments.
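
A minimal sketch of the duplicate-keeping behaviour, assuming a SQLContext has been initialized as in the Examples section (the data and variable names here are illustrative):

df <- createDataFrame(sqlContext, data.frame(id = c(1, 2)))
doubled <- unionAll(df, df)   # 4 rows: duplicates are kept, as with SQL 'UNION ALL'
deduped <- distinct(doubled)  # 2 rows: follow with distinct() to drop duplicates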

Usage

unionAll(x, y)

rbind(..., deparse.level = 1)

## S4 method for signature 'SparkDataFrame,SparkDataFrame'
unionAll(x, y)

## S4 method for signature 'SparkDataFrame'
rbind(x, ..., deparse.level = 1)

Arguments

x

A SparkDataFrame.

y

A SparkDataFrame to be unioned with x.

...

Additional SparkDataFrame(s) to union (used by rbind).

deparse.level

Currently not used; included to match the signature of base R's rbind.

Value

A SparkDataFrame containing the result of the union.

See Also

Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unpersist, withColumn, write.df, write.jdbc, write.json, write.parquet, write.text

Examples

## Not run: 
##D sc <- sparkR.init()
##D sqlContext <- sparkRSQL.init(sc)
##D df1 <- read.json(sqlContext, path)
##D df2 <- read.json(sqlContext, path2)
##D unioned <- unionAll(df1, df2)
## End(Not run)
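
rbind accepts more than two SparkDataFrames at once; a hedged sketch building on the example above (path3 is hypothetical, and all inputs are assumed to share the same schema):

## Not run: 
##D df3 <- read.json(sqlContext, path3)
##D stacked <- rbind(df1, df2, df3)  # same result as chaining unionAll calls
##D count(stacked)                   # equals count(df1) + count(df2) + count(df3)
## End(Not run)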

[Package SparkR version 2.0.0]