filter {SparkR}    R Documentation

Filter

Description

Filter the rows of a SparkDataFrame according to a given condition.

Usage

filter(x, condition)

where(x, condition)

## S4 method for signature 'SparkDataFrame,characterOrColumn'
filter(x, condition)

## S4 method for signature 'SparkDataFrame,characterOrColumn'
where(x, condition)

Arguments

x

A SparkDataFrame to be filtered.

condition

The condition to filter on. This may be either a Column expression or a string containing a SQL expression.
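
For illustration, the two forms are interchangeable (a minimal sketch; the SparkDataFrame df and column age are hypothetical, not from this page):

filter(df, "age > 21")     # condition as a SQL expression string
filter(df, df$age > 21)    # condition as a Column expression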

Value

A SparkDataFrame containing only the rows that meet the condition.

See Also

Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, write.df, write.jdbc, write.json, write.parquet, write.text

Other subsetting functions: [[, select

Examples

## Not run: 
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
path <- "path/to/file.json"
df <- read.json(sqlContext, path)
filter(df, "col1 > 0")
filter(df, df$col2 != "abcdefg")
## End(Not run)
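
A further sketch, not part of the original example (it reuses df, col1, and col2 from above): where is an alias for filter, and Column conditions can be combined with the & and | operators defined on Columns.

## Not run: 
where(df, "col1 > 0")                            # equivalent to filter(df, "col1 > 0")
filter(df, df$col1 > 0 & df$col2 != "abcdefg")   # combined Column conditions
## End(Not run)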

[Package SparkR version 2.0.0]