:: DeveloperApi :: Thrown when a query fails to analyze, usually because the query itself is invalid.
A column that will be computed based on the data in a DataFrame.
:: Experimental :: A convenient class used for constructing schema.
:: Experimental :: Functionality for working with missing data in DataFrames.
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores). Use SparkSession.read to access this.
:: Experimental :: Statistic functions for DataFrames.
Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores). Use Dataset.write to access this.
A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
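A minimal sketch of those typed, parallel transformations, assuming Spark is on the classpath; the local session, the app name, and the Person case class are illustrative, not part of the API description:

```scala
import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Long)

// Hypothetical throwaway local session, used only for this sketch.
val spark = SparkSession.builder().master("local[1]").appName("dataset-sketch").getOrCreate()
import spark.implicits._

// filter and map are checked against Person at compile time,
// yet still run as parallel relational operations.
val people = Seq(Person("Alice", 30L), Person("Bob", 25L)).toDS()
val names  = people.filter(_.age >= 30).map(_.name).collect()

spark.stop()
```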
A container for a Dataset, used for implicit conversions in Scala.
:: Experimental :: Used to convert a JVM object of type T to and from the internal Spark SQL representation.
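As a sketch of that conversion, an Encoder for a case class can be derived with Encoders.product; its schema shows how the JVM type maps onto Spark SQL's representation (the Person type here is illustrative):

```scala
import org.apache.spark.sql.{Encoder, Encoders}

case class Person(name: String, age: Long)

// Derive an Encoder that converts Person instances to and from
// Spark SQL's internal format.
val personEncoder: Encoder[Person] = Encoders.product[Person]

// The derived schema mirrors the case class fields.
val fieldNames = personEncoder.schema.fieldNames
```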
:: Experimental :: Holder for experimental methods for the bravest.
:: Experimental :: A class to consume data generated by a StreamingQuery.
:: Experimental :: A Dataset that has been logically grouped by a user-specified grouping key.
Represents one row of output from a relational operator.
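A Row can also be constructed directly and read back positionally; a small sketch:

```scala
import org.apache.spark.sql.Row

// Rows are generic, untyped containers: fields are accessed by
// ordinal position with type-specific getters.
val row  = Row("Alice", 30)
val name = row.getString(0)
val age  = row.getInt(1)
```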
Runtime configuration interface for Spark.
The entry point for working with structured data (rows and columns) in Spark 1.x.
A collection of implicit methods for converting common Scala objects into Datasets.
The entry point to programming Spark with the Dataset and DataFrame API.
Converts a logical plan into zero or more SparkPlans.
The input type expected for this expression. Can be Any if the expression is type-checked by the analyzer instead of the compiler (i.e. when the expression is constructed from a parsed SQL string rather than through the typed API).
The output type of this column.
Functions for registering user-defined functions.
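A sketch of registering a user-defined function so it can be called from SQL, again assuming a local session; the function name plusOne is our choice for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[1]").appName("udf-sketch").getOrCreate()

// Register an ordinary Scala function under a SQL-callable name.
spark.udf.register("plusOne", (x: Int) => x + 1)

// The registered UDF is now usable in SQL text.
val answer = spark.sql("SELECT plusOne(41) AS v").collect()(0).getInt(0)

spark.stop()
```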
:: Experimental :: Methods for creating an Encoder.
This SQLContext object contains utility functions to create a singleton SQLContext instance, or to get the created SQLContext instance.
Contains API classes that are specific to a single language (i.e. Java).
:: Experimental :: Functions available for DataFrame.
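These built-in column functions compose into expressions over DataFrames; a minimal sketch, assuming a local session (the sample data is made up):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.upper

val spark = SparkSession.builder().master("local[1]").appName("functions-sketch").getOrCreate()
import spark.implicits._

// upper builds a Column expression that Spark evaluates per row.
val df = Seq(("alice", 1)).toDF("name", "id")
val first = df.select(upper($"name").as("NAME")).collect()(0).getString(0)

spark.stop()
```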
Support for running Spark SQL queries using functionality from Apache Hive (does not require an existing Hive installation).
All classes in this package are considered an internal API to Spark and are subject to change between minor releases.
A set of APIs for adding data sources to Spark SQL.
Contains a type system for attributes produced by relations, including complex types like structs, arrays and maps.