SparkR (R on Spark)
- Starting Up: SparkSession
- Starting Up from RStudio
- Creating SparkDataFrames
- SparkDataFrame Operations
- Selecting rows, columns
- Grouping, Aggregation
- Operating on Columns
- Applying User-Defined Function
- Running SQL Queries from SparkR
- Machine Learning
- Data type mapping between R and Spark
- Structured Streaming
- R Function Name Conflicts
- Migration Guide
SparkR is an R package that provides a light-weight frontend to use Apache Spark from R. In Spark 2.4.3, SparkR provides a distributed data frame implementation that supports operations like selection, filtering, aggregation etc. (similar to R data frames, dplyr) but on large datasets. SparkR also supports distributed machine learning using MLlib.
A SparkDataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R, but with richer optimizations under the hood. SparkDataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing local R data frames.
All of the examples on this page use sample data included in R or the Spark distribution and can be run using the ./bin/sparkR shell.
Starting Up: SparkSession
The entry point into SparkR is the SparkSession, which connects your R program to a Spark cluster. You can create a SparkSession with sparkR.session and pass in options such as the application name, any Spark packages depended on, etc. Further, you can also work with SparkDataFrames via SparkSession. If you are working from the sparkR shell, the SparkSession should already be created for you, and you would not need to call sparkR.session.
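For example, a minimal sketch of starting a session from an R script (the application name and memory setting here are illustrative):

```r
library(SparkR)
# Start (or connect to) a SparkSession; the app name and config value are placeholders
sparkR.session(appName = "SparkR-example",
               sparkConfig = list(spark.driver.memory = "2g"))
```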
Starting Up from RStudio
You can also start SparkR from RStudio. You can connect your R program to a Spark cluster from RStudio, the R shell, Rscript, or other R IDEs. To start, make sure SPARK_HOME is set in the environment (you can check Sys.getenv), load the SparkR package, and call sparkR.session as below. It will check for the Spark installation and, if not found, it will be downloaded and cached automatically. Alternatively, you can also run install.spark manually.
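A minimal sketch of the startup sequence from RStudio, assuming an existing local Spark installation (the SPARK_HOME path shown is a placeholder):

```r
# Point SPARK_HOME at an existing Spark installation if it is not already set
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
  Sys.setenv(SPARK_HOME = "/home/spark")  # placeholder path
}
# Load SparkR from that installation and start a session
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
sparkR.session(master = "local[*]", sparkConfig = list(spark.driver.memory = "2g"))
```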
In addition to calling sparkR.session, you could also specify certain Spark driver properties. Normally these Application properties and Runtime Environment properties cannot be set programmatically, because by then the driver JVM process has already been started; in this case, SparkR takes care of this for you. To set them, pass them as you would other configuration properties in the sparkConfig argument to sparkR.session().
The following Spark driver properties can be set in sparkConfig with sparkR.session from RStudio:
| Property Name | Property group | spark-submit equivalent |
|---|---|---|
| spark.master | Application Properties | --master |
| spark.yarn.keytab | Application Properties | --keytab |
| spark.yarn.principal | Application Properties | --principal |
| spark.driver.memory | Application Properties | --driver-memory |
| spark.driver.extraClassPath | Runtime Environment | --driver-class-path |
| spark.driver.extraJavaOptions | Runtime Environment | --driver-java-options |
| spark.driver.extraLibraryPath | Runtime Environment | --driver-library-path |
Creating SparkDataFrames
With a SparkSession, applications can create SparkDataFrames from a local R data frame, from a Hive table, or from other data sources.
From local data frames
The simplest way to create a SparkDataFrame is to convert a local R data frame. Specifically, we can use createDataFrame and pass in the local R data frame to create a SparkDataFrame. As an example, the following creates a SparkDataFrame based on the faithful dataset from R.
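For example:

```r
# Convert the local R data frame 'faithful' into a SparkDataFrame
df <- createDataFrame(faithful)
# Display the first rows of the distributed data frame
head(df)
```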
From Data Sources
SparkR supports operating on a variety of data sources through the
SparkDataFrame interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more specific options that are available for the built-in data sources.
The general method for creating SparkDataFrames from data sources is
read.df. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically.
SparkR supports reading JSON, CSV and Parquet files natively, and through packages available from sources like Third Party Projects, you can find data source connectors for popular file formats like Avro. These packages can be added by specifying --packages with spark-submit or sparkR commands, or by initializing a SparkSession with the sparkPackages parameter when in an interactive R shell or from RStudio.
We can see how to use data sources using an example JSON input file. Note that the file that is used here is not a typical JSON file. Each line in the file must contain a separate, self-contained valid JSON object. For more information, please see JSON Lines text format, also called newline-delimited JSON. As a consequence, a regular multi-line JSON file will most often fail.
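A sketch of reading such a file (the path below points at the sample file shipped with the Spark source tree and may differ in your setup):

```r
# Read a JSON Lines file into a SparkDataFrame
people <- read.df("./examples/src/main/resources/people.json", "json")
head(people)
# Inspect the schema Spark inferred from the JSON records
printSchema(people)
```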
The data sources API natively supports CSV formatted input files. For more information please refer to SparkR read.df API documentation.
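For example, a sketch of reading a CSV file (the path and options are illustrative):

```r
# Read a CSV file with a header row, inferring column types and treating "NA" as missing
df <- read.df("path/to/people.csv", "csv",
              header = "true", inferSchema = "true", na.strings = "NA")
```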
The data sources API can also be used to save out SparkDataFrames into multiple file formats. For example, we can save the SparkDataFrame from the previous example to a Parquet file using write.df.
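A sketch, reusing the people SparkDataFrame from the JSON example (the output path is a placeholder):

```r
# Save the SparkDataFrame as Parquet, overwriting any existing output
write.df(people, path = "people.parquet", source = "parquet", mode = "overwrite")
```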
From Hive tables
You can also create SparkDataFrames from Hive tables. To do this we will need to create a SparkSession with Hive support which can access tables in the Hive MetaStore. Note that Spark should have been built with Hive support and more details can be found in the SQL programming guide. In SparkR, by default it will attempt to create a SparkSession with Hive support enabled (
enableHiveSupport = TRUE).
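A sketch of creating and querying a Hive table (the table name and input path are illustrative):

```r
# Start a SparkSession with Hive support (enabled by default)
sparkR.session()
sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
# Queries can be expressed in HiveQL; results come back as a SparkDataFrame
results <- sql("FROM src SELECT key, value")
head(results)
```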
SparkDataFrame Operations
SparkDataFrames support a number of functions for structured data processing. Here we include some basic examples; a complete list can be found in the API docs:
Selecting rows, columns
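As a brief sketch, reusing the faithful SparkDataFrame created earlier, columns can be selected and rows filtered by a condition:

```r
df <- createDataFrame(faithful)
# Select only the "eruptions" column
head(select(df, df$eruptions))
# Keep only the rows with a waiting time below 50 minutes
head(filter(df, df$waiting < 50))
```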
Grouping, Aggregation
SparkR data frames support a number of commonly used functions to aggregate data after grouping. For example, we can compute a histogram of the waiting time in the faithful dataset as shown below.
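A sketch of the grouping and counting involved:

```r
# Count how often each waiting time occurs
waiting_counts <- summarize(groupBy(df, df$waiting), count = n(df$waiting))
# Show the most common waiting times first
head(arrange(waiting_counts, desc(waiting_counts$count)))
```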
In addition to standard aggregations, SparkR supports the OLAP cube operators cube and rollup, as shown below.
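For example, a sketch using a SparkDataFrame built from the mtcars dataset:

```r
carsDF <- createDataFrame(mtcars)
# Average mpg over every grouping combination of cyl, disp and gear
head(agg(cube(carsDF, "cyl", "disp", "gear"), avg(carsDF$mpg)))
# rollup produces the hierarchical subtotals instead
head(agg(rollup(carsDF, "cyl", "disp", "gear"), avg(carsDF$mpg)))
```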
Operating on Columns
SparkR also provides a number of functions that can be directly applied to columns for data processing and during aggregation. The example below shows the use of basic arithmetic functions.
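A sketch with the faithful SparkDataFrame:

```r
# Derive a new column: waiting time in seconds instead of minutes
df$waiting_secs <- df$waiting * 60
head(df)
```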
Applying User-Defined Function
In SparkR, we support several kinds of User-Defined Functions:
Run a given function on a large dataset using dapply or dapplyCollect
dapply: Apply a function to each partition of a SparkDataFrame. The function is applied to each partition and should have only one parameter, to which the data.frame corresponding to that partition is passed. The output of the function should be a data.frame. The schema specifies the row format of the resulting SparkDataFrame; it must match the data types of the returned value.
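A sketch with the faithful SparkDataFrame, adding a derived column in each partition:

```r
# Declare the row format of the result: the original columns plus the derived one
schema <- structType(structField("eruptions", "double"),
                     structField("waiting", "double"),
                     structField("waiting_secs", "double"))
# The function receives each partition as a local data.frame
df1 <- dapply(df, function(x) { cbind(x, x$waiting * 60) }, schema)
head(collect(df1))
```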
dapplyCollect: Like dapply, apply a function to each partition of a SparkDataFrame and collect the result back. The output of the function should be a data.frame, but no schema is required to be passed. Note that dapplyCollect can fail if the output of the UDF run on all partitions cannot be pulled to the driver and fit in driver memory.
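The same transformation with dapplyCollect, which needs no schema because the result is returned as a local data.frame:

```r
ldf <- dapplyCollect(df, function(x) { cbind(x, waiting_secs = x$waiting * 60) })
head(ldf, 3)
```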
Run a given function on a large dataset grouped by input column(s), using gapply or gapplyCollect
gapply: Apply a function to each group of a SparkDataFrame. The function is applied to each group of the SparkDataFrame and should have only two parameters: the grouping key and an R data.frame corresponding to that key. The groups are chosen from the SparkDataFrame's column(s). The output of the function should be a data.frame. The schema specifies the row format of the resulting SparkDataFrame; it must represent the R function's output schema in terms of Spark data types. The column names of the returned data.frame are set by the user.
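A sketch that computes the maximum eruption duration for each distinct waiting time in the faithful SparkDataFrame:

```r
result <- gapply(
  df,
  "waiting",
  function(key, x) {
    # One row per group: the grouping key and the group's maximum eruption time
    y <- data.frame(key, max(x$eruptions))
    colnames(y) <- c("waiting", "max_eruption")
    y
  },
  structType(structField("waiting", "double"),
             structField("max_eruption", "double")))
head(collect(arrange(result, "max_eruption", decreasing = TRUE)))
```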
gapplyCollect: Like gapply, applies a function to each group of a SparkDataFrame and collects the result back to an R data.frame. The output of the function should be a data.frame, but no schema is required to be passed. Note that gapplyCollect can fail if the output of the UDF run on all groups cannot be pulled to the driver and fit in driver memory.
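The same computation with gapplyCollect, which returns a local data.frame and needs no schema:

```r
result <- gapplyCollect(
  df,
  "waiting",
  function(key, x) {
    y <- data.frame(key, max(x$eruptions))
    colnames(y) <- c("waiting", "max_eruption")
    y
  })
head(result[order(result$max_eruption, decreasing = TRUE), ])
```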
Run local R functions distributed using spark.lapply
spark.lapply: Similar to lapply in native R, spark.lapply runs a function over a list of elements and distributes the computations with Spark. It applies a function to the elements of a list in a manner similar to doParallel or lapply. The results of all the computations should fit on a single machine. If that is not the case, you can do something like df <- createDataFrame(list) and then use dapply.
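A sketch that fits one local glm per model family in parallel; each returned summary easily fits in driver memory:

```r
families <- c("gaussian", "poisson")
train <- function(family) {
  model <- glm(Sepal.Length ~ Sepal.Width + Species, iris, family = family)
  summary(model)
}
# Distribute the loop over the cluster and collect the summaries as a list
model.summaries <- spark.lapply(families, train)
print(model.summaries)
```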
Running SQL Queries from SparkR
A SparkDataFrame can also be registered as a temporary view in Spark SQL, which allows you to run SQL queries over its data. The sql function enables applications to run SQL queries programmatically and returns the result as a SparkDataFrame.
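A sketch, registering the people SparkDataFrame from the JSON example as a temporary view (the view name and query are illustrative):

```r
people <- read.df("./examples/src/main/resources/people.json", "json")
createOrReplaceTempView(people, "people")
# SQL can be executed directly; the result is another SparkDataFrame
teenagers <- sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
head(teenagers)
```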
Machine Learning
SparkR currently supports the following machine learning algorithms:
- Multilayer Perceptron (MLP)
- Linear Support Vector Machine
- Accelerated Failure Time (AFT) Survival Model
- Generalized Linear Model (GLM)
- Decision Tree for Regression and Classification
- Gradient Boosted Trees for Regression and Classification
- Random Forest for Regression and Classification
- Gaussian Mixture Model (GMM)
- Latent Dirichlet Allocation (LDA)
- Frequent Pattern Mining (FP-growth)
Under the hood, SparkR uses MLlib to train the model. Please refer to the corresponding section of MLlib user guide for example code.
Users can call summary to print a summary of the fitted model, predict to make predictions on new data, and write.ml/read.ml to save/load fitted models.
SparkR supports a subset of the available R formula operators for model fitting, including ‘~’, ‘.’, ‘:’, ‘+’, and ‘-’.
The following example shows how to save and load an MLlib model with SparkR.
training <- read.df("data/mllib/sample_multiclass_classification_data.txt", source = "libsvm")

# Fit a generalized linear model of family "gaussian" with spark.glm
df_list <- randomSplit(training, c(7, 3), 2)
gaussianDF <- df_list[[1]]
gaussianTestDF <- df_list[[2]]
gaussianGLM <- spark.glm(gaussianDF, label ~ features, family = "gaussian")

# Save and then load a fitted MLlib model
modelPath <- tempfile(pattern = "ml", fileext = ".tmp")
write.ml(gaussianGLM, modelPath)
gaussianGLM2 <- read.ml(modelPath)

# Check model summary
summary(gaussianGLM2)

# Check model prediction
gaussianPredictions <- predict(gaussianGLM2, gaussianTestDF)
head(gaussianPredictions)

unlink(modelPath)
Data type mapping between R and Spark
Structured Streaming
SparkR supports the Structured Streaming API. Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. For more information, see the R API section of the Structured Streaming Programming Guide.
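As a minimal sketch, a streaming word count over a socket source (the host, port, and console sink are illustrative):

```r
# Read a stream of lines from a socket and count words continuously
lines <- read.stream("socket", host = "localhost", port = 9999)
words <- selectExpr(lines, "explode(split(value, ' ')) as word")
wordCounts <- count(groupBy(words, "word"))
# Write the running counts to the console; the query runs until stopped
query <- write.stream(wordCounts, "console", outputMode = "complete")
```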
R Function Name Conflicts
When loading and attaching a new package in R, it is possible to have a name conflict, where a function is masking another function.
The following functions are masked by the SparkR package:
| Masked function | How to Access |
|---|---|
| cov in package:stats | stats::cov(x, y = NULL, use = "everything", method = c("pearson", "kendall", "spearman")) |
| filter in package:stats | stats::filter(x, filter, method = c("convolution", "recursive"), sides = 2, circular = FALSE, init) |
| sample in package:base | base::sample(x, size, replace = FALSE, prob = NULL) |
Since part of SparkR is modeled on the dplyr package, certain functions in SparkR share the same names with those in dplyr. Depending on the load order of the two packages, some functions from the package loaded first are masked by those in the package loaded after. In such cases, prefix such calls with the package name, for instance, SparkR::cume_dist(x) or dplyr::cume_dist(x).
You can inspect the search path in R with search().
Migration Guide
Upgrading From SparkR 1.5.x to 1.6.x
- Before Spark 1.6.0, the default mode for writes was append. It was changed in Spark 1.6.0 to error to match the Scala API.
- SparkSQL converts NA in R to null and vice-versa.
Upgrading From SparkR 1.6.x to 2.0
- The method table has been removed and replaced by tableToDF.
- The class DataFrame has been renamed to SparkDataFrame to avoid name conflicts.
- Spark's SQLContext and HiveContext have been deprecated and replaced by SparkSession. Instead of sparkR.init(), call sparkR.session() in its place to instantiate the SparkSession. Once that is done, the currently active SparkSession will be used for SparkDataFrame operations.
- The parameter sparkExecutorEnv is not supported by sparkR.session. To set the environment for the executors, set Spark config properties with the prefix "spark.executorEnv.VAR_NAME", for example, "spark.executorEnv.PATH".
- The sqlContext parameter is no longer required for these functions:
- The method registerTempTable has been deprecated and replaced by createOrReplaceTempView.
- The method dropTempTable has been deprecated and replaced by dropTempView.
- The sc SparkContext parameter is no longer required for these functions:
Upgrading to SparkR 2.1.0
- join no longer performs a Cartesian product by default; use crossJoin instead.
Upgrading to SparkR 2.2.0
- A numPartitions parameter has been added to createDataFrame and as.DataFrame. When splitting the data, the partition position calculation has been made to match the one in Scala.
- The method createExternalTable has been deprecated and replaced by createTable. Either method can be called to create an external or managed table. Additional catalog methods have also been added.
- By default, derby.log is now saved to tempdir(). This will be created when instantiating the SparkSession with enableHiveSupport set to TRUE.
- spark.lda was not setting the optimizer correctly. It has been corrected.
- Several model summary outputs are updated to have coefficients as matrix. This includes spark.logit, spark.kmeans, spark.glm. Model summary outputs for spark.gaussianMixture have added log-likelihood as loglik.
Upgrading to SparkR 2.3.0
- The stringsAsFactors parameter was previously ignored with collect, for example, in collect(createDataFrame(iris), stringsAsFactors = TRUE). It has been corrected.
- For summary, an option for the statistics to compute has been added. Its output is changed from that of describe.
- A warning can be raised if the versions of the SparkR package and the Spark JVM do not match.
Upgrading to SparkR 2.3.1 and above
- In SparkR 2.3.0 and earlier, the start parameter of the substr method was wrongly subtracted by one and treated as 0-based. This could lead to inconsistent substring results and also did not match the behaviour of substr in R. In version 2.3.1 and later, it has been fixed so that the substr method is now 1-based. As an example, substr(lit('abcdef'), 2, 4) would result in abc in SparkR 2.3.0, while the result is bcd in SparkR 2.3.1.
Upgrading to SparkR 2.4.0
- Previously, the validity of the size of the last layer in spark.mlp was not checked. For example, if the training data only has two labels, a layers parameter like c(1, 3) did not previously cause an error, but now it does.