Packages

package sql

Allows the execution of relational queries, including those expressed in SQL using Spark.

Source
package.scala
Linear Supertypes
AnyRef, Any

Type Members

  1. class AnalysisException extends Exception with Serializable

    Thrown when a query fails to analyze, usually because the query itself is invalid.
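
    For example (a minimal sketch, assuming an existing SparkSession named spark), referencing a column that does not exist fails analysis:

    try {
      spark.sql("SELECT no_such_column FROM range(10)").collect()
    } catch {
      case e: AnalysisException =>
        println(s"Analysis failed: ${e.getMessage}")
    }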

    Annotations
    @Stable()
    Since

    1.3.0

  2. class Column extends Logging

    A column that will be computed based on the data in a DataFrame.

    A new column can be constructed based on the input columns present in a DataFrame:

    df("columnName")            // On a specific `df` DataFrame.
    col("columnName")           // A generic column not yet associated with a DataFrame.
    col("columnName.field")     // Extracting a struct field
    col("`a.column.with.dots`") // Escape `.` in column names.
    $"columnName"               // Scala short hand for a named column.

    Column objects can be composed to form complex expressions:

    $"a" + 1
    $"a" === $"b"
    Annotations
    @Stable()
    Since

    1.3.0

    Note

    The internal Catalyst expression can be accessed via expr, but this method is for debugging purposes only and can change in any future Spark releases.

  3. class ColumnName extends Column

    A convenient class used for constructing schemas.
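
    For example (a minimal sketch, assuming spark.implicits._ is imported), the $"..." syntax yields a ColumnName, whose typed members build StructFields:

    import org.apache.spark.sql.types.StructType

    // Each typed member (.string, .int, ...) produces a StructField with that name and type.
    val schema = StructType(Seq($"name".string, $"age".int))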

    Annotations
    @Stable()
    Since

    1.3.0

  4. trait CreateTableWriter[T] extends WriteConfigMethods[CreateTableWriter[T]]

    Trait to restrict calls to create and replace operations.

    Since

    3.0.0

  5. type DataFrame = Dataset[Row]
  6. final class DataFrameNaFunctions extends AnyRef

    Functionality for working with missing data in DataFrames.
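
    For example (a minimal sketch for a hypothetical DataFrame df), this functionality is reached through Dataset.na:

    // Drop rows containing any null or NaN values, then fill nulls in the `age` column with 0.
    val dropped = df.na.drop()
    val filled  = df.na.fill(Map("age" -> 0))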

    Annotations
    @Stable()
    Since

    1.3.1

  7. class DataFrameReader extends Logging

    Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc). Use SparkSession.read to access this.
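
    For example (a sketch with a placeholder path):

    // Load a CSV file with a header row, inferring the schema from the data.
    val df = spark.read
      .format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("/path/to/people.csv")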

    Annotations
    @Stable()
    Since

    1.4.0

  8. final class DataFrameStatFunctions extends AnyRef

    Statistic functions for DataFrames.
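
    For example (a minimal sketch for a hypothetical DataFrame df), the functions are reached through Dataset.stat:

    // Pearson correlation of two numeric columns, and a contingency table of two columns.
    val correlation = df.stat.corr("height", "weight")
    val contingency = df.stat.crosstab("department", "gender")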

    Annotations
    @Stable()
    Since

    1.4.0

  9. final class DataFrameWriter[T] extends AnyRef

    Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores, etc). Use Dataset.write to access this.
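
    For example (a sketch with a placeholder path):

    // Write partitioned Parquet output, overwriting any existing data at the path.
    df.write
      .mode("overwrite")
      .partitionBy("year", "month")
      .parquet("/path/to/output")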

    Annotations
    @Stable()
    Since

    1.4.0

  10. final class DataFrameWriterV2[T] extends CreateTableWriter[T]

    Interface used to write a org.apache.spark.sql.Dataset to external storage using the v2 API.
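
    For example (a sketch with a hypothetical table name, assuming a v2-capable catalog and spark.implicits._ in scope), the writer is obtained via Dataset.writeTo:

    // Create or replace a catalog table from the Dataset, partitioned by the `date` column.
    df.writeTo("catalog.db.events")
      .using("parquet")
      .partitionedBy($"date")
      .createOrReplace()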

    Annotations
    @Experimental()
    Since

    3.0.0

  11. class Dataset[T] extends Serializable

    A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations. Each Dataset also has an untyped view called a DataFrame, which is a Dataset of Row.

    Operations available on Datasets are divided into transformations and actions. Transformations are the ones that produce new Datasets, and actions are the ones that trigger computation and return results. Example transformations include map, filter, select, and aggregate (groupBy). Example actions include count, show, and writing data out to file systems.

    Datasets are "lazy", i.e. computations are only triggered when an action is invoked. Internally, a Dataset represents a logical plan that describes the computation required to produce the data. When an action is invoked, Spark's query optimizer optimizes the logical plan and generates a physical plan for efficient execution in a parallel and distributed manner. To explore the logical plan as well as optimized physical plan, use the explain function.

    To efficiently support domain-specific objects, an Encoder is required. The encoder maps the domain-specific type T to Spark's internal type system. For example, given a class Person with two fields, name (string) and age (int), an encoder is used to tell Spark to generate code at runtime to serialize the Person object into a binary structure. This binary structure often has a much lower memory footprint and is optimized for efficiency in data processing (e.g. in a columnar format). To understand the internal binary representation for data, use the schema function.

    There are typically two ways to create a Dataset. The most common way is by pointing Spark to some files on storage systems, using the read function available on a SparkSession.

    val people = spark.read.parquet("...").as[Person]  // Scala
    Dataset<Person> people = spark.read().parquet("...").as(Encoders.bean(Person.class)); // Java

    Datasets can also be created through transformations available on existing Datasets. For example, the following creates a new Dataset by applying a filter on the existing one:

    val names = people.map(_.name)  // in Scala; names is a Dataset[String]
    Dataset<String> names = people.map((Person p) -> p.name, Encoders.STRING()); // in Java

    Dataset operations can also be untyped, through various domain-specific-language (DSL) functions defined in: Dataset (this class), Column, and functions. These operations are very similar to the operations available in the data frame abstraction in R or Python.

    To select a column from the Dataset, use the apply method in Scala and col in Java.

    val ageCol = people("age")  // in Scala
    Column ageCol = people.col("age"); // in Java

    Note that the Column type can also be manipulated through its various functions.

    // The following creates a new column that increases everybody's age by 10.
    people("age") + 10  // in Scala
    people.col("age").plus(10);  // in Java

    A more concrete example in Scala:

    // To create Dataset[Row] using SparkSession
    val people = spark.read.parquet("...")
    val department = spark.read.parquet("...")
    
    people.filter("age > 30")
      .join(department, people("deptId") === department("id"))
      .groupBy(department("name"), people("gender"))
      .agg(avg(people("salary")), max(people("age")))

    and in Java:

    // To create Dataset<Row> using SparkSession
    Dataset<Row> people = spark.read().parquet("...");
    Dataset<Row> department = spark.read().parquet("...");
    
    people.filter(people.col("age").gt(30))
      .join(department, people.col("deptId").equalTo(department.col("id")))
      .groupBy(department.col("name"), people.col("gender"))
      .agg(avg(people.col("salary")), max(people.col("age")));
    Annotations
    @Stable()
    Since

    1.6.0

  12. case class DatasetHolder[T] extends Product with Serializable

    A container for a Dataset, used for implicit conversions in Scala.

    To use this, import implicit conversions in SQL:

    val spark: SparkSession = ...
    import spark.implicits._
    Annotations
    @Stable()
    Since

    1.6.0

  13. trait Encoder[T] extends Serializable

    Used to convert a JVM object of type T to and from the internal Spark SQL representation.

    Scala

    Encoders are generally created automatically through implicits from a SparkSession, or can be explicitly created by calling static methods on Encoders.

    import spark.implicits._
    
    val ds = Seq(1, 2, 3).toDS() // implicitly provided (spark.implicits.newIntEncoder)

    Java

    Encoders are specified by calling static methods on Encoders.

    List<String> data = Arrays.asList("abc", "abc", "xyz");
    Dataset<String> ds = context.createDataset(data, Encoders.STRING());

    Encoders can be composed into tuples:

    Encoder<Tuple2<Integer, String>> encoder2 = Encoders.tuple(Encoders.INT(), Encoders.STRING());
    List<Tuple2<Integer, String>> data2 = Arrays.asList(new scala.Tuple2(1, "a"));
    Dataset<Tuple2<Integer, String>> ds2 = context.createDataset(data2, encoder2);

    Or constructed from Java Beans:

    Encoders.bean(MyClass.class);

    Implementation

    • Encoders should be thread-safe.
    Annotations
    @implicitNotFound( ... )
    Since

    1.6.0

  14. class ExperimentalMethods extends AnyRef

    :: Experimental :: Holder for experimental methods for the bravest. We make NO guarantee about the stability regarding binary compatibility and source compatibility of methods here.

    spark.experimental.extraStrategies += ...
    Annotations
    @Experimental() @Unstable()
    Since

    1.3.0

  15. abstract class ForeachWriter[T] extends Serializable

    The abstract class for writing custom logic to process data generated by a query. This is often used to write the output of a streaming query to arbitrary storage systems. Any implementation of this base class will be used by Spark in the following way.

    • A single instance of this class is responsible for all the data generated by a single task in a query. In other words, one instance is responsible for processing one partition of the data generated in a distributed manner.
    • Any implementation of this class must be serializable because each task will get a fresh serialized-deserialized copy of the provided object. Hence, it is strongly recommended that any initialization for writing data (e.g. opening a connection or starting a transaction) is done after the open(...) method has been called, which signifies that the task is ready to generate data.
    • The lifecycle of the methods is as follows.

      For each partition with `partitionId`:
          For each batch/epoch of streaming data (if it is a streaming query) with `epochId`:
              Method `open(partitionId, epochId)` is called.
              If `open` returns true:
                   For each row in the partition and batch/epoch, method `process(row)` is called.
              Method `close(errorOrNull)` is called with error (if any) seen while processing rows.
    

    Important points to note:

    • Spark doesn't guarantee the same output for (partitionId, epochId), so deduplication cannot be achieved with (partitionId, epochId); e.g. the source may provide a different number of partitions for some reason, a Spark optimization may change the number of partitions, etc. Refer to SPARK-28650 for more details. If you need deduplication on output, try foreachBatch instead.
    • The close() method will be called if the open() method returns successfully (irrespective of the return value), except if the JVM crashes in the middle.

    Scala example:

    datasetOfString.writeStream.foreach(new ForeachWriter[String] {
    
      def open(partitionId: Long, version: Long): Boolean = {
        // open connection
        true  // return true if the connection is ready and this partition should be processed
      }
    
      def process(record: String) = {
        // write string to connection
      }
    
      def close(errorOrNull: Throwable): Unit = {
        // close the connection
      }
    })

    Java example:

    datasetOfString.writeStream().foreach(new ForeachWriter<String>() {
    
      @Override
      public boolean open(long partitionId, long version) {
        // open connection
        return true;  // return true if the connection is ready and this partition should be processed
      }
    
      @Override
      public void process(String value) {
        // write string to connection
      }
    
      @Override
      public void close(Throwable errorOrNull) {
        // close the connection
      }
    });
    Since

    2.0.0

  16. class KeyValueGroupedDataset[K, V] extends Serializable

    A Dataset that has been logically grouped by a user-specified grouping key. Users should not construct a KeyValueGroupedDataset directly, but should instead call groupByKey on an existing Dataset.
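
    For example (a minimal sketch, assuming spark.implicits._ is imported):

    // groupByKey returns a KeyValueGroupedDataset; count() aggregates it back into a Dataset.
    val words  = Seq("a", "b", "a").toDS()
    val counts = words.groupByKey(identity).count()  // Dataset[(String, Long)]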

    Since

    2.0.0

  17. trait LowPrioritySQLImplicits extends AnyRef

    Lower priority implicit methods for converting Scala objects into Datasets. Conflicting implicits are placed here to disambiguate resolution.

    Reasons for including specific implicits: newProductEncoder - to disambiguate for Lists which are both Seq and Product

  18. class RelationalGroupedDataset extends AnyRef

    A set of methods for aggregations on a DataFrame, created by groupBy, cube or rollup (and also pivot).

    The main method is the agg function, which has multiple variants. This class also contains some first-order statistics such as mean and sum for convenience.
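
    For example (a sketch for a hypothetical DataFrame df, with org.apache.spark.sql.functions._ imported):

    // groupBy returns a RelationalGroupedDataset; agg materializes it back into a DataFrame.
    df.groupBy("department")
      .agg(avg("salary"), max("age"))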

    Annotations
    @Stable()
    Since

    2.0.0

    Note

    This class was named GroupedData in Spark 1.x.

  19. trait Row extends Serializable

    Represents one row of output from a relational operator. Allows both generic access by ordinal, which will incur boxing overhead for primitives, and native primitive access.

    It is invalid to use the native primitive interface to retrieve a value that is null; instead, a user must check isNullAt before attempting to retrieve a value that might be null.

    To create a new Row, use RowFactory.create() in Java or Row.apply() in Scala.

    A Row object can be constructed by providing field values. Example:

    import org.apache.spark.sql._
    
    // Create a Row from values.
    Row(value1, value2, value3, ...)
    // Create a Row from a Seq of values.
    Row.fromSeq(Seq(value1, value2, ...))

    A value of a row can be accessed through both generic access by ordinal, which will incur boxing overhead for primitives, and native primitive access. An example of generic access by ordinal:

    import org.apache.spark.sql._
    
    val row = Row(1, true, "a string", null)
    // row: Row = [1,true,a string,null]
    val firstValue = row(0)
    // firstValue: Any = 1
    val fourthValue = row(3)
    // fourthValue: Any = null

    For native primitive access, it is invalid to use the native primitive interface to retrieve a value that is null; instead, a user must check isNullAt before attempting to retrieve a value that might be null. An example of native primitive access:

    // using the row from the previous example.
    val firstValue = row.getInt(0)
    // firstValue: Int = 1
    val isNull = row.isNullAt(3)
    // isNull: Boolean = true

    In Scala, fields in a Row object can be extracted in a pattern match. Example:

    import org.apache.spark.sql._
    
    val pairs = sql("SELECT key, value FROM src").rdd.map {
      case Row(key: Int, value: String) =>
        key -> value
    }
    Annotations
    @Stable()
    Since

    1.3.0

  20. class RowFactory extends AnyRef
  21. class RuntimeConfig extends AnyRef

    Runtime configuration interface for Spark. To access this, use SparkSession.conf.

    Options set here are automatically propagated to the Hadoop configuration during I/O.
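
    For example (a minimal sketch, assuming an existing SparkSession named spark):

    // Set and read back a SQL configuration value at runtime.
    spark.conf.set("spark.sql.shuffle.partitions", "64")
    val partitions = spark.conf.get("spark.sql.shuffle.partitions")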

    Annotations
    @Stable()
    Since

    2.0.0

  22. class SQLContext extends Logging with Serializable

    The entry point for working with structured data (rows and columns) in Spark 1.x.

    As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.

    Annotations
    @Stable()
    Since

    1.0.0

  23. abstract class SQLImplicits extends LowPrioritySQLImplicits

    A collection of implicit methods for converting common Scala objects into Datasets.
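
    For example (a minimal sketch, assuming an existing SparkSession named spark), the implicits are typically brought into scope from a session instance:

    import spark.implicits._

    // Local collections gain toDS/toDF, and the $"..." column syntax becomes available.
    val ds = Seq(1, 2, 3).toDS()
    val df = Seq(("Alice", 29), ("Bob", 31)).toDF("name", "age")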

    Since

    1.6.0

  24. sealed abstract final class SaveMode extends Enum[SaveMode]
  25. class SparkSession extends Serializable with Closeable with Logging

    The entry point to programming Spark with the Dataset and DataFrame API.

    In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get the existing session:

    SparkSession.builder().getOrCreate()

    The builder can also be used to create a new session:

    SparkSession.builder
      .master("local")
      .appName("Word Count")
      .config("spark.some.config.option", "some-value")
      .getOrCreate()
    Annotations
    @Stable()
  26. class SparkSessionExtensions extends AnyRef

    :: Experimental :: Holder for injection points to the SparkSession. We make NO guarantee about the stability regarding binary compatibility and source compatibility of methods here.

    This currently provides the following extension points:

    • Analyzer Rules.
    • Check Analysis Rules.
    • Optimizer Rules.
    • Planning Strategies.
    • Customized Parser.
    • (External) Catalog listeners.
    • Columnar Rules.
    • Adaptive Query Stage Preparation Rules.

    The extensions can be used by calling withExtensions on the SparkSession.Builder, for example:

    SparkSession.builder()
      .master("...")
      .config("...", true)
      .withExtensions { extensions =>
        extensions.injectResolutionRule { session =>
          ...
        }
        extensions.injectParser { (session, parser) =>
          ...
        }
      }
      .getOrCreate()

    The extensions can also be used by setting the Spark SQL configuration property spark.sql.extensions. Multiple extensions can be set using a comma-separated list. For example:

    SparkSession.builder()
      .master("...")
      .config("spark.sql.extensions", "org.example.MyExtensions")
      .getOrCreate()
    
    class MyExtensions extends Function1[SparkSessionExtensions, Unit] {
      override def apply(extensions: SparkSessionExtensions): Unit = {
        extensions.injectResolutionRule { session =>
          ...
        }
        extensions.injectParser { (session, parser) =>
          ...
        }
      }
    }

    Note that the injected builders should not assume that the SparkSession is fully initialized, and should not touch the session's internals (e.g. the SessionState).

    Annotations
    @DeveloperApi() @Experimental() @Unstable()
  27. type Strategy = SparkStrategy

    Converts a logical plan into zero or more SparkPlans. This API is exposed for experimenting with the query planner and is not designed to be stable across Spark releases. Developers writing libraries should instead consider using the stable APIs provided in org.apache.spark.sql.sources.

    Annotations
    @DeveloperApi() @Unstable()
  28. class TypedColumn[-T, U] extends Column

    A Column where an Encoder has been given for the expected input and return type. To create a TypedColumn, use the as function on a Column.

    T

    The input type expected for this expression. Can be Any if the expression is type checked by the analyzer instead of the compiler (i.e. expr("sum(...)")).

    U

    The output type of this column.
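
    For example (a minimal sketch for a hypothetical Dataset named people with an age field, assuming spark.implicits._ and org.apache.spark.sql.functions._ are imported):

    // `as[...]` converts an untyped aggregate Column into a TypedColumn,
    // so the typed select returns a Dataset[Double] rather than a DataFrame.
    val avgAge: Dataset[Double] = people.select(avg($"age").as[Double])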

    Annotations
    @Stable()
    Since

    1.6.0

  29. class UDFRegistration extends Logging

    Functions for registering user-defined functions. Use SparkSession.udf to access this:

    spark.udf
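
    For example (a minimal sketch, with a hypothetical UDF name):

    // Register a UDF named "strLen" and call it from a SQL expression.
    spark.udf.register("strLen", (s: String) => s.length)
    spark.sql("SELECT strLen('Spark')").show()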
    Annotations
    @Stable()
    Since

    1.3.0

  30. trait WriteConfigMethods[R] extends AnyRef

    Configuration methods common to create/replace operations and insert/overwrite operations.

    R

    builder type to return

    Since

    3.0.0

Value Members

  1. object Encoders

    Methods for creating an Encoder.

    Since

    1.6.0

  2. object Row extends Serializable

    Annotations
    @Stable()
    Since

    1.3.0

  3. object SQLContext extends Serializable

    This SQLContext object contains utility functions to create a singleton SQLContext instance, or to get the created SQLContext instance.

    It also provides utility functions to support a preferred SQLContext per thread in multi-session scenarios: setActive sets a SQLContext for the current thread, which will then be returned by getOrCreate instead of the global one.

  4. object SparkSession extends Logging with Serializable
    Annotations
    @Stable()
  5. object functions

    Commonly used functions available for DataFrame operations. Using functions defined here provides a little bit more compile-time safety to make sure the function exists.

    Spark also includes more built-in functions that are less common and are not defined here. You can still access them (and all the functions defined here) using the functions.expr() API and calling them through a SQL expression string. You can find the entire list of functions in the SQL API documentation.

    As an example, isnan is a function that is defined here. You can use isnan(col("myCol")) to invoke the isnan function. This way the programming language's compiler ensures isnan exists and is of the proper form. You can also use expr("isnan(myCol)") to invoke the same function. In this case, Spark itself will ensure isnan exists when it analyzes the query.

    regr_count is an example of a function that is built-in but not defined here, because it is less commonly used. To invoke it, use expr("regr_count(yCol, xCol)").

    The function APIs here usually provide only methods with Column signatures, because a Column can also be constructed from other types such as a native string. The other variants currently exist for historical reasons.
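
    For example (a sketch for a hypothetical DataFrame df with a column myCol), the two invocation styles described above:

    import org.apache.spark.sql.functions._

    // Compile-time-checked function reference versus a SQL expression string.
    df.select(isnan(col("myCol")))
    df.select(expr("isnan(myCol)"))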

    Annotations
    @Stable()
    Since

    1.3.0
