The entry point for executing relational queries using Spark.
Caches the specified table in-memory.
:: Experimental ::
Creates an empty Parquet file with the schema of class A, which can be registered as a table.
A case class type that describes the desired schema of the Parquet file to be created.
The path where the directory containing parquet metadata should be created. Data inserted into this table will also be stored at this location.
When false, an exception will be thrown if this directory already exists.
A Hadoop configuration object that can be used to specify options to the Parquet output format.
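As a sketch of how these parameters fit together (assuming Spark 1.x's `SQLContext` API; the `Person` case class, application name, and output path are illustrative, not part of the original):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical case class whose fields define the Parquet schema.
case class Person(name: String, age: Int)

object CreateParquetFileExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("example").setMaster("local"))
    val sqlContext = new SQLContext(sc)

    // Create an empty Parquet table with Person's schema.
    // allowExisting = false: throw if /tmp/people.parquet already exists.
    val people = sqlContext.createParquetFile[Person](
      path = "/tmp/people.parquet",
      allowExisting = false,
      conf = new Configuration())

    people.registerAsTable("people") // now queryable by name
  }
}
```

Data subsequently inserted into the `people` table lands under the same path.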
Creates a SchemaRDD from an RDD of case classes.
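A minimal sketch of this conversion, assuming the Spark 1.x implicit `createSchemaRDD` (the `Person` case class is hypothetical):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SchemaRDD}

case class Person(name: String, age: Int) // illustrative schema

object CreateSchemaRDDExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("example").setMaster("local"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD // brings the implicit conversion into scope

    // An ordinary RDD of case classes is converted to a SchemaRDD implicitly;
    // the schema is inferred from Person's fields.
    val people: SchemaRDD = sc.parallelize(Seq(Person("Alice", 30), Person("Bob", 25)))
    people.registerAsTable("people")
  }
}
```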
:: DeveloperApi :: Allows Catalyst LogicalPlans to be executed as a SchemaRDD.
Loads a Parquet file, returning the result as a SchemaRDD.
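For example, loading the file created above might look like this (a sketch assuming an existing `SQLContext` named `sqlContext` and Parquet data at the path shown):

```scala
// Load Parquet data; the schema is read from the file's own metadata.
val parquetData = sqlContext.parquetFile("/tmp/people.parquet")
parquetData.registerAsTable("parquetPeople") // query it like any other table
```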
Prepares a planned SparkPlan for execution by binding references to specific ordinals, and inserting shuffle operations as needed.
Registers the given RDD as a temporary table in the catalog.
Executes a SQL query using Spark, returning the result as a SchemaRDD.
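A hedged end-to-end sketch, again assuming the Spark 1.x API (table name, data, and query are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Int) // hypothetical schema

object SqlQueryExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("example").setMaster("local"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD

    sc.parallelize(Seq(Person("Alice", 30), Person("Ben", 15)))
      .registerAsTable("people")

    // The result of sql() is itself a SchemaRDD, so normal RDD operations apply.
    val teenagers = sqlContext.sql(
      "SELECT name FROM people WHERE age >= 13 AND age <= 19")
    teenagers.map(row => "Name: " + row(0)).collect().foreach(println)
  }
}
```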
Returns the specified table as a SchemaRDD.
Removes the specified table from the in-memory cache.
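Caching and uncaching pair up as shown in this sketch (assumes an existing `SQLContext` named `sqlContext` with a table "people" already registered):

```scala
sqlContext.cacheTable("people")   // later scans of "people" read from memory
sqlContext.sql("SELECT COUNT(*) FROM people").collect()
sqlContext.uncacheTable("people") // release the in-memory copy
```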