select {SparkR}    R Documentation

Select

Description

Selects a set of columns, specified by name or as Column expressions.

Usage

select(x, col, ...)

## S4 method for signature 'SparkDataFrame'
x$name

## S4 replacement method for signature 'SparkDataFrame'
x$name <- value

## S4 method for signature 'SparkDataFrame,character'
select(x, col, ...)

## S4 method for signature 'SparkDataFrame,Column'
select(x, col, ...)

## S4 method for signature 'SparkDataFrame,list'
select(x, col)

Arguments

x

a SparkDataFrame.

col

a list of columns, or a single Column or column name.

...

additional column(s), used when only one column is specified in col. If more than one column is specified in col, ... should be left empty.

name

name of a Column (given as a bare name, not wrapped in "").

value

a Column, an atomic vector of length 1 to be used as a literal value, or NULL. If NULL, the specified Column is dropped.
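
The $ accessor and the $<- replacement method give data.frame-like column access. A minimal sketch of their behavior, assuming df is an existing SparkDataFrame with columns name and age (as in the Examples below):

  df$age                         # $ returns a Column, usable wherever a Column is expected
  df$age_plus_one <- df$age + 1  # assigning a Column expression adds or replaces a column
  df$flag <- TRUE                # a length-1 atomic vector is used as a literal value
  df$flag <- NULL                # assigning NULL drops the column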

Value

A new SparkDataFrame with selected columns.

Note

$ since 1.4.0

$<- since 1.4.0

select(SparkDataFrame, character) since 1.4.0

select(SparkDataFrame, Column) since 1.4.0

select(SparkDataFrame, list) since 1.4.0

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Other subsetting functions: filter(), subset()

Examples

## Not run: 
##D   select(df, "*")
##D   select(df, "col1", "col2")
##D   select(df, df$name, df$age + 1)
##D   select(df, c("col1", "col2"))
##D   select(df, list(df$name, df$age + 1))
##D   # As with R data frames, columns can be referenced with $ and used for subsetting
##D   df[, df$age]
## End(Not run)
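
A minimal runnable sketch, assuming a local Spark installation is available to SparkR (the faithful dataset ships with base R):

  library(SparkR)
  sparkR.session()
  df <- createDataFrame(faithful)                    # convert an R data.frame to a SparkDataFrame
  head(select(df, "eruptions"))                      # select by column name
  head(select(df, df$eruptions, df$waiting * 60))    # select by Column expressions
  sparkR.session.stop()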

[Package SparkR version 3.0.1]