Interface StagedTable
- All Superinterfaces:
Table
This is used to implement atomic CREATE TABLE AS SELECT and REPLACE TABLE AS SELECT queries. The planner will create one of these via StagingTableCatalog.stageCreate(Identifier, StructType, Transform[], Map) or StagingTableCatalog.stageReplace(Identifier, StructType, Transform[], Map) to prepare the table for writing. This table should usually implement SupportsWrite. A new writer will be constructed via SupportsWrite.newWriteBuilder(LogicalWriteInfo), and the write will be committed. The job concludes with a call to commitStagedChanges(), at which point implementations are expected to commit the table's metadata into the metastore along with the data written through the write builder this table created.
- Since:
- 3.0.0
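To make the lifecycle above concrete, here is a minimal, self-contained sketch of how the planner drives a staged table for an atomic CTAS. The names `Metastore`, `InMemoryStagedTable`, and `runCtas` are illustrative stand-ins, not Spark APIs; in Spark the staged table comes from `StagingTableCatalog.stageCreate(...)` and rows flow through a write builder rather than a direct `write` call.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the staged CTAS lifecycle; none of these types are Spark classes.
class StagedCtasSketch {

    /** Stand-in for the catalog's metadata store. */
    static class Metastore {
        final Map<String, List<String>> tables = new HashMap<>();
    }

    /** Minimal analogue of a StagedTable that supports writes: buffers rows until commit. */
    static class InMemoryStagedTable {
        private final Metastore metastore;
        private final String name;
        private final List<String> staged = new ArrayList<>();

        InMemoryStagedTable(Metastore metastore, String name) {
            this.metastore = metastore;
            this.name = name;
        }

        // In Spark, rows arrive through SupportsWrite.newWriteBuilder(LogicalWriteInfo).
        void write(String row) {
            staged.add(row);
        }

        // Publish metadata and data together: the table only becomes visible here.
        void commitStagedChanges() {
            metastore.tables.put(name, new ArrayList<>(staged));
        }

        // Drop temporary outputs so a failed job leaves nothing behind.
        void abortStagedChanges() {
            staged.clear();
        }
    }

    /** What the planner does for CREATE TABLE AS SELECT, greatly simplified. */
    static void runCtas(Metastore metastore, String name, List<String> rows, boolean failWrite) {
        InMemoryStagedTable staged = new InMemoryStagedTable(metastore, name); // ~ stageCreate(...)
        try {
            for (String row : rows) {
                staged.write(row);
            }
            if (failWrite) {
                throw new RuntimeException("simulated write failure");
            }
            staged.commitStagedChanges();
        } catch (RuntimeException e) {
            staged.abortStagedChanges();
        }
    }
}
```

The point of the staging pattern is visible in `runCtas`: a concurrent reader of the metastore never sees a half-written table, because the entry appears only inside `commitStagedChanges()`.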
Method Summary
- void abortStagedChanges(): Abort the changes that were staged, both in metadata and from temporary outputs of this table's writers.
- void commitStagedChanges(): Finalize the creation or replacement of this table.
- default CustomTaskMetric[] reportDriverMetrics(): Retrieve driver metrics after a commit.

Methods inherited from interface org.apache.spark.sql.connector.catalog.Table:
capabilities, columns, constraints, currentVersion, name, partitioning, properties, schema
Method Details
commitStagedChanges
void commitStagedChanges()
Finalize the creation or replacement of this table.
abortStagedChanges
void abortStagedChanges()
Abort the changes that were staged, both in metadata and from temporary outputs of this table's writers.
reportDriverMetrics
default CustomTaskMetric[] reportDriverMetrics() throws RuntimeException
Retrieve driver metrics after a commit. This is analogous to Write.reportDriverMetrics(). Note that these metrics must be included in the supported custom metrics reported by `supportedCustomMetrics` of the StagingTableCatalog that returned the staged table.
- Returns:
- an array of commit metric values. Throws if the table has not been committed yet.
- Throws:
- RuntimeException - if the table has not been committed yet.
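The throws-before-commit contract can be sketched with a tiny stand-in class. `CommitMetricsTable` and its row-count metric are hypothetical, and plain `long[]` replaces Spark's `CustomTaskMetric[]` to keep the example self-contained; only the before/after-commit behavior mirrors the contract described above.

```java
// Hypothetical sketch of the reportDriverMetrics() contract; not a Spark class.
class CommitMetricsTable {
    private long rowsWritten = 0;
    private boolean committed = false;

    void write(String row) {
        rowsWritten++;
    }

    void commitStagedChanges() {
        committed = true;
    }

    /**
     * Per the contract: metric values are only available after the commit,
     * so calling this earlier fails with a RuntimeException.
     */
    long[] reportDriverMetrics() {
        if (!committed) {
            throw new RuntimeException("table has not been committed yet");
        }
        // e.g. a single "rows committed" metric value.
        return new long[] { rowsWritten };
    }
}
```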