package sql
Allows the execution of relational queries, including those expressed in SQL using Spark.
- Source
- package.scala
Package Members
- package api
Contains API classes that are specific to a single language (i.e. Java).
- package artifact
- package avro
- package catalog
- package catalyst
- package columnar
- package connector
- package exceptions
- package expressions
- package jdbc
- package ml
- package scripting
- package sources
A set of APIs for adding data sources to Spark SQL.
- package streaming
- package types
Contains a type system for attributes produced by relations, including complex types like structs, arrays and maps.
- package util
- package vectorized
Type Members
- class AnalysisException extends Exception with SparkThrowable with Serializable with WithOrigin
Thrown when a query fails to analyze, usually because the query itself is invalid.
- Annotations
- @Stable()
- Since
1.3.0
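For illustration, a minimal sketch of handling this exception (assuming a SparkSession named spark and a query that references a column that does not exist):

import org.apache.spark.sql.AnalysisException

try {
  // Referencing a non-existent column fails during analysis, not at runtime.
  spark.sql("SELECT no_such_column FROM some_table").show()
} catch {
  case e: AnalysisException => println(s"Analysis failed: ${e.getMessage}")
}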
- class Column extends Logging
A column that will be computed based on the data in a DataFrame.
A new column can be constructed based on the input columns present in a DataFrame:

df("columnName")            // On a specific `df` DataFrame.
col("columnName")           // A generic column not yet associated with a DataFrame.
col("columnName.field")     // Extracting a struct field.
col("`a.column.with.dots`") // Escape `.` in column names.
$"columnName"               // Scala short hand for a named column.

Column objects can be composed to form complex expressions:

$"a" + 1
$"a" === $"b"
- Annotations
- @Stable()
- Since
1.3.0
- class ColumnName extends Column
A convenient class used for constructing schema.
- Annotations
- @Stable()
- Since
1.3.0
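A hedged sketch of building a schema with the $"..." syntax (assumes spark.implicits._ is in scope; the field names are hypothetical):

import org.apache.spark.sql.types.StructType
import spark.implicits._ // provides the $"..." syntax, which yields a ColumnName

// ColumnName exposes helpers such as `string` and `int` that return StructFields.
val schema = StructType($"name".string :: $"age".int :: Nil)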
- trait CreateTableWriter[T] extends WriteConfigMethods[CreateTableWriter[T]]
Trait to restrict calls to create and replace operations.
- Since
3.0.0
- type DataFrame = Dataset[Row]
- final class DataFrameNaFunctions extends sql.api.DataFrameNaFunctions[Dataset]
Functionality for working with missing data in DataFrames.
- Annotations
- @Stable()
- Since
1.3.1
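A brief, hedged example of the na functions (assumes a DataFrame named df with nullable columns "age" and "name"):

// Drop rows containing any null or NaN values.
val cleaned = df.na.drop()

// Replace nulls with per-column defaults.
val filled = df.na.fill(Map("age" -> 0, "name" -> "unknown"))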
- class DataFrameReader extends sql.api.DataFrameReader[Dataset]
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc). Use SparkSession.read to access this.
- Annotations
- @Stable()
- Since
1.4.0
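A short sketch of typical usage (the paths and options are placeholders):

// Load a DataFrame from JSON and from Parquet.
val jsonDf = spark.read.format("json").option("multiLine", "true").load("/path/to/input.json")
val parquetDf = spark.read.parquet("/path/to/data.parquet")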
- final class DataFrameStatFunctions extends sql.api.DataFrameStatFunctions[Dataset]
Statistic functions for DataFrames.
- Annotations
- @Stable()
- Since
1.4.0
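A hedged sketch of the stat functions (assumes a DataFrame named df with numeric columns "x" and "y"):

// Pearson correlation between two numeric columns.
val correlation = df.stat.corr("x", "y")

// Approximate frequent items for a column.
val frequent = df.stat.freqItems(Seq("x"))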
- abstract class DataFrameWriter[T] extends AnyRef
Interface used to write a org.apache.spark.sql.api.Dataset to external storage systems (e.g. file systems, key-value stores, etc). Use Dataset.write to access this.
- Annotations
- @Stable()
- Since
1.4.0
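A minimal sketch of writing a DataFrame out (the path and table name are placeholders):

// Write as Parquet, overwriting any existing output.
df.write.mode("overwrite").parquet("/path/to/output")

// Or save as a managed table.
df.write.format("parquet").saveAsTable("my_database.my_table")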
- abstract class DataFrameWriterV2[T] extends CreateTableWriter[T]
Interface used to write a org.apache.spark.sql.api.Dataset to external storage using the v2 API.
- Annotations
- @Experimental()
- Since
3.0.0
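A hedged sketch of the v2 write path (the catalog and table names are hypothetical and must be resolvable by a configured catalog; the "day" column is assumed to exist):

import org.apache.spark.sql.functions.col

// Create (or replace) a table, then append to it.
df.writeTo("my_catalog.db.events").using("parquet").partitionedBy(col("day")).createOrReplace()
df.writeTo("my_catalog.db.events").append()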
- class DataSourceRegistration extends Logging
Functions for registering user-defined data sources. Use SparkSession.dataSource to access this.
- Annotations
- @Evolving()
- class Dataset[T] extends sql.api.Dataset[T, Dataset]
A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations. Each Dataset also has an untyped view called a DataFrame, which is a Dataset of Row.
Operations available on Datasets are divided into transformations and actions. Transformations are the ones that produce new Datasets, and actions are the ones that trigger computation and return results. Example transformations include map, filter, select, and aggregate (groupBy). Example actions include count, show, or writing data out to file systems.
Datasets are "lazy", i.e. computations are only triggered when an action is invoked. Internally, a Dataset represents a logical plan that describes the computation required to produce the data. When an action is invoked, Spark's query optimizer optimizes the logical plan and generates a physical plan for efficient execution in a parallel and distributed manner. To explore the logical plan as well as the optimized physical plan, use the explain function.
To efficiently support domain-specific objects, an Encoder is required. The encoder maps the domain-specific type T to Spark's internal type system. For example, given a class Person with two fields, name (string) and age (int), an encoder is used to tell Spark to generate code at runtime to serialize the Person object into a binary structure. This binary structure often has a much lower memory footprint and is optimized for efficiency in data processing (e.g. in a columnar format). To understand the internal binary representation for data, use the schema function.
There are typically two ways to create a Dataset. The most common way is by pointing Spark to some files on storage systems, using the read function available on a SparkSession.

val people = spark.read.parquet("...").as[Person]  // Scala
Dataset<Person> people = spark.read().parquet("...").as(Encoders.bean(Person.class)); // Java

Datasets can also be created through transformations available on existing Datasets. For example, the following creates a new Dataset by applying a filter on the existing one:

val names = people.map(_.name)  // in Scala; names is a Dataset[String]
Dataset<String> names = people.map(
  (MapFunction<Person, String>) p -> p.name, Encoders.STRING()); // Java

Dataset operations can also be untyped, through various domain-specific-language (DSL) functions defined in: Dataset (this class), Column, and functions. These operations are very similar to the operations available in the data frame abstraction in R or Python.
To select a column from the Dataset, use the apply method in Scala and col in Java.

val ageCol = people("age")         // in Scala
Column ageCol = people.col("age"); // in Java

Note that the Column type can also be manipulated through its various functions.

// The following creates a new column that increases everybody's age by 10.
people("age") + 10          // in Scala
people.col("age").plus(10); // in Java

A more concrete example in Scala:

// To create Dataset[Row] using SparkSession
val people = spark.read.parquet("...")
val department = spark.read.parquet("...")

people.filter("age > 30")
  .join(department, people("deptId") === department("id"))
  .groupBy(department("name"), people("gender"))
  .agg(avg(people("salary")), max(people("age")))

and in Java:

// To create Dataset<Row> using SparkSession
Dataset<Row> people = spark.read().parquet("...");
Dataset<Row> department = spark.read().parquet("...");

people.filter(people.col("age").gt(30))
  .join(department, people.col("deptId").equalTo(department.col("id")))
  .groupBy(department.col("name"), people.col("gender"))
  .agg(avg(people.col("salary")), max(people.col("age")));
- Annotations
- @Stable()
- Since
1.6.0
- case class DatasetHolder[T] extends Product with Serializable
A container for a Dataset, used for implicit conversions in Scala.
To use this, import implicit conversions in SQL:

val spark: SparkSession = ...
import spark.implicits._
- Annotations
- @Stable()
- Since
1.6.0
- trait Encoder[T] extends Serializable
Used to convert a JVM object of type T to and from the internal Spark SQL representation.
Scala
Encoders are generally created automatically through implicits from a SparkSession, or can be explicitly created by calling static methods on Encoders.

import spark.implicits._
val ds = Seq(1, 2, 3).toDS() // implicitly provided (spark.implicits.newIntEncoder)

Java
Encoders are specified by calling static methods on Encoders.

List<String> data = Arrays.asList("abc", "abc", "xyz");
Dataset<String> ds = context.createDataset(data, Encoders.STRING());

Encoders can be composed into tuples:

Encoder<Tuple2<Integer, String>> encoder2 = Encoders.tuple(Encoders.INT(), Encoders.STRING());
List<Tuple2<Integer, String>> data2 = Arrays.asList(new scala.Tuple2(1, "a"));
Dataset<Tuple2<Integer, String>> ds2 = context.createDataset(data2, encoder2);
Or constructed from Java Beans:
Encoders.bean(MyClass.class);
Implementation
- Encoders should be thread-safe.
- Annotations
- @implicitNotFound()
- Since
1.6.0
- class ExperimentalMethods extends AnyRef
:: Experimental :: Holder for experimental methods for the bravest. We make NO guarantee about the stability regarding binary compatibility and source compatibility of methods here.
spark.experimental.extraStrategies += ...
- Annotations
- @Experimental() @Unstable()
- Since
1.3.0
- trait ExtendedExplainGenerator extends AnyRef
A trait for a session extension to implement that provides additional explain plan information.
- Annotations
- @DeveloperApi() @Since("4.0.0")
- abstract class ForeachWriter[T] extends Serializable
The abstract class for writing custom logic to process data generated by a query. This is often used to write the output of a streaming query to arbitrary storage systems. Any implementation of this base class will be used by Spark in the following way.
- A single instance of this class is responsible for all the data generated by a single task in a query. In other words, one instance is responsible for processing one partition of the data generated in a distributed manner.
- Any implementation of this class must be serializable because each task will get a fresh serialized-deserialized copy of the provided object. Hence, it is strongly recommended that any initialization for writing data (e.g. opening a connection or starting a transaction) is done after the open(...) method has been called, which signifies that the task is ready to generate data.
- The lifecycle of the methods is as follows.

For each partition with `partitionId`:
  For each batch/epoch of streaming data (if it's a streaming query) with `epochId`:
    Method `open(partitionId, epochId)` is called.
    If `open` returns true:
      For each row in the partition and batch/epoch, method `process(row)` is called.
    Method `close(errorOrNull)` is called with the error (if any) seen while processing rows.

Important points to note:
- Spark doesn't guarantee the same output for (partitionId, epochId), so deduplication cannot be achieved with (partitionId, epochId), e.g. the source provides a different number of partitions for some reason, a Spark optimization changes the number of partitions, etc. Refer to SPARK-28650 for more details. If you need deduplication on output, try out foreachBatch instead.
- The close() method will be called if the open() method returns successfully (irrespective of the return value), except if the JVM crashes in the middle.
Scala example:
datasetOfString.writeStream.foreach(new ForeachWriter[String] {
  def open(partitionId: Long, version: Long): Boolean = {
    // open connection
  }
  def process(record: String) = {
    // write string to connection
  }
  def close(errorOrNull: Throwable): Unit = {
    // close the connection
  }
})
Java example:
datasetOfString.writeStream().foreach(new ForeachWriter<String>() {
  @Override public boolean open(long partitionId, long version) {
    // open connection
  }
  @Override public void process(String value) {
    // write string to connection
  }
  @Override public void close(Throwable errorOrNull) {
    // close the connection
  }
});
- Since
2.0.0
- class KeyValueGroupedDataset[K, V] extends sql.api.KeyValueGroupedDataset[K, V, Dataset]
A Dataset has been logically grouped by a user specified grouping key. Users should not construct a KeyValueGroupedDataset directly, but should instead call groupByKey on an existing Dataset.
- Since
2.0.0
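A small sketch (assumes spark.implicits._ is in scope; the data is made up):

import spark.implicits._

val events = Seq(("a", 1), ("b", 2), ("a", 3)).toDS()

// groupByKey produces a KeyValueGroupedDataset; count and mapGroups are typical follow-ups.
val counts = events.groupByKey(_._1).count()
val sums = events.groupByKey(_._1).mapGroups((key, rows) => (key, rows.map(_._2).sum))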
- trait LowPrioritySQLImplicits extends AnyRef
Lower priority implicit methods for converting Scala objects into Datasets. Conflicting implicits are placed here to disambiguate resolution.
Reasons for including specific implicits: newProductEncoder - to disambiguate for Lists, which are both Seq and Product.
- abstract class MergeIntoWriter[T] extends AnyRef
MergeIntoWriter provides methods to define and execute merge actions based on specified conditions.
Please note that schema evolution is disabled by default.
- T
the type of data in the Dataset.
- Annotations
- @Experimental()
- Since
4.0.0
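A hedged sketch of an upsert using this API (the table names "source" and "target" and the join key "id" are hypothetical; schema evolution stays disabled unless withSchemaEvolution() is called):

import spark.implicits._

// Update matching rows in the target table and insert the rest.
spark.table("source")
  .mergeInto("target", $"source.id" === $"target.id")
  .whenMatched()
  .updateAll()
  .whenNotMatched()
  .insertAll()
  .merge()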
- class Observation extends AnyRef
Helper class to simplify usage of Dataset.observe(String, Column, Column*):

// Observe row count (rows) and highest id (maxid) in the Dataset while writing it
val observation = Observation("my metrics")
val observed_ds = ds.observe(observation, count(lit(1)).as("rows"), max($"id").as("maxid"))
observed_ds.write.parquet("ds.parquet")
val metrics = observation.get
This collects the metrics while the first action is executed on the observed dataset. Subsequent actions do not modify the metrics returned by get. Retrieval of the metric via get blocks until the first action has finished and metrics become available.
This class does not support streaming datasets.
- Since
3.3.0
- class RelationalGroupedDataset extends sql.api.RelationalGroupedDataset[Dataset]
A set of methods for aggregations on a DataFrame, created by groupBy, cube or rollup (and also pivot).
The main method is the agg function, which has multiple variants. This class also contains some first-order statistics such as mean and sum for convenience.
- Annotations
- @Stable()
- Since
2.0.0
- Note
This class was named GroupedData in Spark 1.x.
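A short sketch (assumes a DataFrame named df with columns "dept", "gender", and "salary"):

import org.apache.spark.sql.functions.{avg, max}

// groupBy returns a RelationalGroupedDataset; agg materializes the aggregation.
val summary = df.groupBy("dept", "gender").agg(avg("salary"), max("salary"))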
- trait Row extends Serializable
Represents one row of output from a relational operator. Allows both generic access by ordinal, which will incur boxing overhead for primitives, as well as native primitive access.
It is invalid to use the native primitive interface to retrieve a value that is null; instead a user must check isNullAt before attempting to retrieve a value that might be null.
To create a new Row, use RowFactory.create() in Java or Row.apply() in Scala.
A Row object can be constructed by providing field values. Example:

import org.apache.spark.sql._

// Create a Row from values.
Row(value1, value2, value3, ...)
// Create a Row from a Seq of values.
Row.fromSeq(Seq(value1, value2, ...))

A value of a row can be accessed through both generic access by ordinal, which will incur boxing overhead for primitives, as well as native primitive access. An example of generic access by ordinal:

import org.apache.spark.sql._

val row = Row(1, true, "a string", null)
// row: Row = [1,true,a string,null]
val firstValue = row(0)
// firstValue: Any = 1
val fourthValue = row(3)
// fourthValue: Any = null

For native primitive access, it is invalid to use the native primitive interface to retrieve a value that is null; instead a user must check isNullAt before attempting to retrieve a value that might be null. An example of native primitive access:

// using the row from the previous example.
val firstValue = row.getInt(0)
// firstValue: Int = 1
val isNull = row.isNullAt(3)
// isNull: Boolean = true

In Scala, fields in a Row object can be extracted in a pattern match. Example:

import org.apache.spark.sql._

val pairs = sql("SELECT key, value FROM src").rdd.map {
  case Row(key: Int, value: String) => key -> value
}
- Annotations
- @Stable()
- Since
1.3.0
- class RowFactory extends AnyRef
A factory class used to construct Row objects.
- Annotations
- @Stable()
- Since
1.3.0
- class RuntimeConfig extends AnyRef
Runtime configuration interface for Spark. To access this, use SparkSession.conf.
Options set here are automatically propagated to the Hadoop configuration during I/O.
- Annotations
- @Stable()
- Since
2.0.0
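A minimal sketch of getting and setting runtime options (the option names are only examples):

// Set and read back a SQL configuration at runtime.
spark.conf.set("spark.sql.shuffle.partitions", "200")
val partitions = spark.conf.get("spark.sql.shuffle.partitions")
val withDefault = spark.conf.get("spark.some.unset.option", "fallback")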
- class SQLContext extends Logging with Serializable
The entry point for working with structured data (rows and columns) in Spark 1.x.
As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.
- Annotations
- @Stable()
- Since
1.0.0
- abstract class SQLImplicits extends LowPrioritySQLImplicits
A collection of implicit methods for converting common Scala objects into Datasets.
- Since
1.6.0
- sealed final class SaveMode extends Enum[SaveMode]
SaveMode is used to specify the expected behavior of saving a DataFrame to a data source.
- Annotations
- @Stable()
- Since
1.3.0
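A brief sketch of selecting a save mode when writing (assumes a DataFrame named df; the output path is a placeholder):

import org.apache.spark.sql.SaveMode

// Append to existing data instead of failing when the path already exists.
df.write.mode(SaveMode.Append).parquet("/path/to/output")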
- class SparkSession extends sql.api.SparkSession[Dataset] with Logging
The entry point to programming Spark with the Dataset and DataFrame API.
In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get an existing session:
SparkSession.builder().getOrCreate()
The builder can also be used to create a new session:
SparkSession.builder
  .master("local")
  .appName("Word Count")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
- Annotations
- @Stable()
- class SparkSessionExtensions extends AnyRef
:: Experimental :: Holder for injection points to the SparkSession. We make NO guarantee about the stability regarding binary compatibility and source compatibility of methods here.
This currently provides the following extension points:
- Analyzer Rules.
- Check Analysis Rules.
- Cache Plan Normalization Rules.
- Optimizer Rules.
- Pre CBO Rules.
- Planning Strategies.
- Customized Parser.
- (External) Catalog listeners.
- Columnar Rules.
- Adaptive Query Post Planner Strategy Rules.
- Adaptive Query Stage Preparation Rules.
- Adaptive Query Execution Runtime Optimizer Rules.
- Adaptive Query Stage Optimizer Rules.
The extensions can be used by calling withExtensions on the SparkSession.Builder, for example:

SparkSession.builder()
  .master("...")
  .config("...", true)
  .withExtensions { extensions =>
    extensions.injectResolutionRule { session => ... }
    extensions.injectParser { (session, parser) => ... }
  }
  .getOrCreate()
The extensions can also be used by setting the Spark SQL configuration property spark.sql.extensions. Multiple extensions can be set using a comma-separated list. For example:

SparkSession.builder()
  .master("...")
  .config("spark.sql.extensions", "org.example.MyExtensions,org.example.YourExtensions")
  .getOrCreate()

class MyExtensions extends Function1[SparkSessionExtensions, Unit] {
  override def apply(extensions: SparkSessionExtensions): Unit = {
    extensions.injectResolutionRule { session => ... }
    extensions.injectParser { (session, parser) => ... }
  }
}

class YourExtensions extends SparkSessionExtensionsProvider {
  override def apply(extensions: SparkSessionExtensions): Unit = {
    extensions.injectResolutionRule { session => ... }
    extensions.injectFunction(...)
  }
}
Note that none of the injected builders should assume that the SparkSession is fully initialized and should not touch the session's internals (e.g. the SessionState).
- Annotations
- @DeveloperApi() @Experimental() @Unstable()
- trait SparkSessionExtensionsProvider extends (SparkSessionExtensions) => Unit
:: Unstable ::
Base trait for implementations used by SparkSessionExtensions.
For example, now we have an external function named Age to register as an extension for SparkSession:

package org.apache.spark.examples.extensions

import org.apache.spark.sql.catalyst.expressions.{CurrentDate, Expression, RuntimeReplaceable, SubtractDates}

case class Age(birthday: Expression, child: Expression) extends RuntimeReplaceable {
  def this(birthday: Expression) = this(birthday, SubtractDates(CurrentDate(), birthday))
  override def exprsReplaced: Seq[Expression] = Seq(birthday)
  override protected def withNewChildInternal(newChild: Expression): Expression = copy(newChild)
}

We need to create our extension, which inherits SparkSessionExtensionsProvider. Example:

package org.apache.spark.examples.extensions

import org.apache.spark.sql.{SparkSessionExtensions, SparkSessionExtensionsProvider}
import org.apache.spark.sql.catalyst.FunctionIdentifier
import org.apache.spark.sql.catalyst.expressions.{Expression, ExpressionInfo}

class MyExtensions extends SparkSessionExtensionsProvider {
  override def apply(v1: SparkSessionExtensions): Unit = {
    v1.injectFunction(
      (new FunctionIdentifier("age"),
        new ExpressionInfo(classOf[Age].getName, "age"),
        (children: Seq[Expression]) => new Age(children.head)))
  }
}
Then, we can inject MyExtensions in three ways:
- withExtensions of SparkSession.Builder
- Config - spark.sql.extensions
- java.util.ServiceLoader - Add to src/main/resources/META-INF/services/org.apache.spark.sql.SparkSessionExtensionsProvider
- Annotations
- @DeveloperApi() @Unstable() @Since("3.2.0")
- Since
3.2.0
- type Strategy = SparkStrategy
Converts a logical plan into zero or more SparkPlans. This API is exposed for experimenting with the query planner and is not designed to be stable across Spark releases. Developers writing libraries should instead consider using the stable APIs provided in org.apache.spark.sql.sources.
- Annotations
- @DeveloperApi() @Unstable()
- class TypedColumn[-T, U] extends Column
A Column where an Encoder has been given for the expected input and return type. To create a TypedColumn, use the as function on a Column.
- T
The input type expected for this expression. Can be Any if the expression is type checked by the analyzer instead of the compiler (i.e. expr("sum(...)")).
- U
The output type of this column.
- Annotations
- @Stable()
- Since
1.6.0
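A small sketch (assumes spark.implicits._ is in scope; the data is made up):

import spark.implicits._

val df = Seq(("Alice", 1L), ("Bob", 2L)).toDF("name", "id")

// `as[...]` turns a Column into a TypedColumn, making the select strongly typed.
val ids: Dataset[Long] = df.select($"id".as[Long])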
- class UDFRegistration extends sql.api.UDFRegistration with Logging
Functions for registering user-defined functions. Use SparkSession.udf to access this:

spark.udf
- Annotations
- @Stable()
- Since
1.3.0
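A minimal sketch of registering and using a UDF (the function name "plus_one" is arbitrary):

import org.apache.spark.sql.functions.udf

// Register for SQL use...
spark.udf.register("plus_one", (x: Int) => x + 1)
spark.sql("SELECT plus_one(41)").show()

// ...or build one for the DataFrame API.
val plusOne = udf((x: Int) => x + 1)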
- class UDTFRegistration extends Logging
Functions for registering user-defined table functions. Use SparkSession.udtf to access this.
- Annotations
- @Evolving()
- Since
3.5.0
- case class WhenMatched[T] extends Product with Serializable
A class for defining actions to be taken when matching rows in a DataFrame during a merge operation.
- T
The type of data in the MergeIntoWriter.
- case class WhenNotMatched[T] extends Product with Serializable
A class for defining actions to be taken when no matching rows are found in a DataFrame during a merge operation.
- T
The type of data in the MergeIntoWriter.
- case class WhenNotMatchedBySource[T] extends Product with Serializable
A class for defining actions to be performed when there is no match by source during a merge operation in a MergeIntoWriter.
- T
the type parameter for the MergeIntoWriter.
- trait WriteConfigMethods[R] extends AnyRef
Configuration methods common to create/replace operations and insert/overwrite operations.
- R
builder type to return
- Since
3.0.0
Value Members
- object Encoders
Methods for creating an Encoder.
- Since
1.6.0
- object Observation
(Scala-specific) Create instances of Observation via Scala apply.
- Since
3.3.0
- object Row extends Serializable
- Annotations
- @Stable()
- Since
1.3.0
- object SQLContext extends Serializable
This SQLContext object contains utility functions to create a singleton SQLContext instance, or to get the created SQLContext instance.
It also provides utility functions to support a preference for threads in multiple-session scenarios: setActive sets a SQLContext for the current thread, which will be returned by getOrCreate instead of the global one.
- object SparkSession extends Logging with Serializable
- Annotations
- @Stable()
- object functions
Commonly used functions available for DataFrame operations. Using functions defined here provides a little bit more compile-time safety to make sure the function exists.
You can call the functions defined here in two ways: _FUNC_(...) and functions.expr("_FUNC_(...)").
As an example, regr_count is a function that is defined here. You can use regr_count(col("yCol"), col("xCol")) to invoke the regr_count function. This way the programming language's compiler ensures regr_count exists and is of the proper form. You can also use expr("regr_count(yCol, xCol)") to invoke the same function. In this case, Spark itself will ensure regr_count exists when it analyzes the query.
You can find the entire list of functions in the SQL API documentation of your Spark version; see also the latest list.
These function APIs usually have methods with a Column signature only, because they can support not only Column but also other types such as a native string. The other variants currently exist for historical reasons.
- Annotations
- @Stable()
- Since
1.3.0
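A short sketch of using both calling styles described above (the column name "name" is made up):

import org.apache.spark.sql.functions.{col, expr, upper}

val df = spark.range(3).withColumn("name", expr("concat('user_', id)"))

// Direct function call vs. the expr(...) form of the same operation.
df.select(upper(col("name")), expr("upper(name)")).show()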