Thrown when a query fails to analyze, usually because the query itself is invalid.
A column that will be computed based on the data in a DataFrame.
The internal Catalyst expression can be accessed via expr, but this method is for debugging purposes only and may change in future Spark releases.
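Columns are typically built with the `col` or `$` helpers and combined with operators. A minimal sketch, assuming a DataFrame `df` with columns `name` and `age`:

```scala
import org.apache.spark.sql.functions.col

// Select the names of rows whose age is at least 18.
// `df` is an assumed, pre-existing DataFrame with "name" and "age" columns.
val adults = df.filter(col("age") >= 18).select(col("name"))
```

Column expressions are lazy descriptions of a computation; nothing runs until an action such as `show()` or `collect()` is invoked.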
A convenient class used for constructing schema.
Functionality for working with missing data in DataFrames.
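These methods are reached through `df.na`. A hedged sketch, assuming a DataFrame `df` with a nullable numeric column `age` and a nullable string column `name`:

```scala
// Replace nulls in "age" with 0, then drop rows where "name" is null.
// `df` is an assumed, pre-existing DataFrame.
val cleaned = df.na.fill(0, Seq("age")).na.drop(Seq("name"))
```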
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc.).
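The reader is obtained via `spark.read` and configured with a fluent builder. A sketch assuming an active SparkSession `spark` and a hypothetical input path:

```scala
// Load a JSON file into a DataFrame; the path is illustrative only.
val people = spark.read
  .format("json")
  .option("multiLine", "true")
  .load("data/people.json")
```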
Statistic functions for DataFrames.
Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores, etc.).
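The writer mirrors the reader and is obtained via `df.write`. A sketch assuming a DataFrame `df` and a hypothetical output path:

```scala
// Write `df` as Parquet, replacing any existing data at the path.
df.write
  .mode("overwrite")
  .format("parquet")
  .save("output/people")
```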
A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
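A short sketch of the typed API, assuming an active SparkSession `spark` and a hypothetical `Person` case class:

```scala
case class Person(name: String, age: Long)

import spark.implicits._  // brings in encoders and the toDS() conversion

val people: Dataset[Person] = Seq(Person("Ann", 30), Person("Bob", 17)).toDS()

// Functional (typed) operations: the compiler checks field access.
val adultNames: Dataset[String] = people.filter(_.age >= 18).map(_.name)
```

The same Dataset also supports relational operations such as `select` and `groupBy`, which are checked at analysis time rather than compile time.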
A container for a Dataset, used for implicit conversions in Scala.
:: Experimental :: Used to convert a JVM object of type T to and from the internal Spark SQL representation.
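Encoders are usually derived implicitly via `spark.implicits._`, but they can also be obtained explicitly from the `Encoders` factory. A sketch, with `Person` as a hypothetical case class:

```scala
import org.apache.spark.sql.{Encoder, Encoders}

// Encoder for a primitive type.
val intEnc: Encoder[Int] = Encoders.scalaInt

// Encoder derived for a Product type (case class).
case class Person(name: String, age: Long)
val personEnc: Encoder[Person] = Encoders.product[Person]
```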
:: Experimental :: Holder for experimental methods for the bravest.
A class to consume data generated by a StreamingQuery.
:: Experimental :: A Dataset that has been logically grouped by a user-specified grouping key.
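A `KeyValueGroupedDataset` is produced by `groupByKey` on a Dataset. A minimal sketch, assuming an active SparkSession `spark`:

```scala
import spark.implicits._

val ds = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()

// Group by the first tuple element, then reduce each group to a sum.
val sums = ds
  .groupByKey(_._1)
  .mapGroups((key, rows) => (key, rows.map(_._2).sum))
```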
Lower priority implicit methods for converting Scala objects into Datasets.
A set of methods for aggregations on a DataFrame, created by Dataset.groupBy.
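A sketch of the untyped grouping path, assuming a DataFrame `df` with hypothetical `department`, `salary`, and `age` columns:

```scala
import org.apache.spark.sql.functions._

// groupBy returns a RelationalGroupedDataset; agg turns it back into a DataFrame.
val stats = df.groupBy("department").agg(avg("salary"), max("age"))
```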
Represents one row of output from a relational operator.
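Fields of a Row can be read by position with typed getters. A minimal sketch:

```scala
import org.apache.spark.sql.Row

val row = Row("Alice", 30)
val name = row.getString(0)  // "Alice"
val age  = row.getInt(1)     // 30
```

Positional access is not type-checked at compile time; a mismatched getter fails at runtime.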
Runtime configuration interface for Spark.
The entry point for working with structured data (rows and columns) in Spark 1.x.
A collection of implicit methods for converting common Scala objects into Datasets.
The entry point to programming Spark with the Dataset and DataFrame API.
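A session is created with the builder pattern; `getOrCreate` returns an existing session if one is already running. A sketch using a local master for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("example")      // illustrative application name
  .master("local[*]")      // run locally using all cores; omit when submitting to a cluster
  .getOrCreate()
```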
:: Experimental :: Holder for injection points to the SparkSession.
Converts a logical plan into zero or more SparkPlans.
The input type expected for this expression. Can be Any if the expression is type checked by the analyzer instead of the compiler (i.e. expr("sum(...)")).
The output type of this column.
Functions for registering user-defined functions.
The user-defined functions must be deterministic.
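Registered UDFs become callable from SQL by name. A sketch assuming an active SparkSession `spark`, with a hypothetical `squared` function:

```scala
// Register a deterministic UDF, then invoke it from SQL.
spark.udf.register("squared", (x: Long) => x * x)
val result = spark.sql("SELECT squared(id) AS sq FROM range(10)")
```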
:: Experimental :: Methods for creating an Encoder.
This SQLContext object contains utility functions to create a singleton SQLContext instance, or to get the created SQLContext instance.
Contains API classes that are specific to a single language (i.e. Java).
Functions available for DataFrame operations.
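These are typically brought in with a wildcard import and composed into column expressions. A sketch, assuming a DataFrame `df` with a `name` column:

```scala
import org.apache.spark.sql.functions._

// Upper-case an existing column and add a constant column.
val out = df.select(upper(col("name")).as("name_upper"), lit(1).as("one"))
```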
Support for running Spark SQL queries using functionality from Apache Hive (does not require an existing Hive installation).
A set of APIs for adding data sources to Spark SQL.
Contains a type system for attributes produced by relations, including complex types like structs, arrays and maps.