Interface SupportsDelta
- All Superinterfaces:
RowLevelOperation
A mix-in interface for RowLevelOperation. Data sources can implement this interface to indicate they support handling deltas of rows.
- Since:
- 3.4.0
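As a rough illustration, a data source might declare a delta-based operation along the following lines. This is only a sketch: the class name, the MERGE command, and the id row ID column are assumptions, and the class is kept abstract so the scan and write builder methods inherited from RowLevelOperation can be omitted.

import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.NamedReference;
import org.apache.spark.sql.connector.write.RowLevelOperation;
import org.apache.spark.sql.connector.write.SupportsDelta;

// Hypothetical delta-based MERGE operation. Declared abstract so the scan and
// write builder methods inherited from RowLevelOperation can be left out here.
abstract class ExampleDeltaMergeOperation implements SupportsDelta {

  @Override
  public RowLevelOperation.Command command() {
    // This operation backs a MERGE command.
    return RowLevelOperation.Command.MERGE;
  }

  @Override
  public NamedReference[] rowId() {
    // Incoming delta rows are matched to existing rows by the "id" column
    // (an assumed primary key for this example).
    return new NamedReference[] { Expressions.column("id") };
  }
}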
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.sql.connector.write.RowLevelOperation
RowLevelOperation.Command
-
Method Summary
- newWriteBuilder(LogicalWriteInfo info)
  Returns a WriteBuilder to configure a Write for this row-level operation.
- default boolean representUpdateAsDeleteAndInsert()
  Controls whether to represent updates as deletes and inserts.
- NamedReference[] rowId()
  Returns the row ID column references that should be used for row equality.
Methods inherited from interface org.apache.spark.sql.connector.write.RowLevelOperation
command, description, newScanBuilder, requiredMetadataAttributes
-
Method Details
-
newWriteBuilder
Description copied from interface: RowLevelOperation
Returns a WriteBuilder to configure a Write for this row-level operation.
Note that Spark will first configure the scan and then the write, allowing data sources to pass information from the scan to the write. For example, the scan can report which condition was used to read the data that may be needed by the write under certain isolation levels. Implementations may capture the built scan or required scan information and then use it while building the write.
- Specified by:
- newWriteBuilder in interface RowLevelOperation
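As an illustration of this scan-then-write ordering, an implementation might capture scan-time state and reuse it while building the delta write. The sketch below is not part of this API's documentation: the snapshot-id option and the two factory helpers are hypothetical, and it assumes the delta-capable builder type DeltaWriteBuilder from the same package as the overridden return type.

import org.apache.spark.sql.connector.read.ScanBuilder;
import org.apache.spark.sql.connector.write.DeltaWriteBuilder;
import org.apache.spark.sql.connector.write.LogicalWriteInfo;
import org.apache.spark.sql.connector.write.SupportsDelta;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// Hypothetical operation that remembers which table snapshot the scan read so
// the write can later be planned and validated against that same snapshot.
abstract class SnapshotAwareDeltaOperation implements SupportsDelta {

  private String scannedSnapshotId;  // captured while configuring the scan

  @Override
  public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
    // Spark configures the scan first; record which snapshot it will read.
    scannedSnapshotId = options.get("snapshot-id");  // assumed option name
    return newScanBuilderFor(scannedSnapshotId, options);
  }

  @Override
  public DeltaWriteBuilder newWriteBuilder(LogicalWriteInfo info) {
    // Called after the scan is configured, so the captured snapshot id is
    // available when planning the delta write.
    return newDeltaWriteBuilderFor(scannedSnapshotId, info);
  }

  // Hypothetical source-specific factory methods.
  protected abstract ScanBuilder newScanBuilderFor(
      String snapshotId, CaseInsensitiveStringMap options);

  protected abstract DeltaWriteBuilder newDeltaWriteBuilderFor(
      String snapshotId, LogicalWriteInfo info);
}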
-
rowId
NamedReference[] rowId()
Returns the row ID column references that should be used for row equality.
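Because the method returns an array of references, a composite row identity can be reported as well. A minimal sketch, assuming hypothetical key columns region and id:

import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.NamedReference;
import org.apache.spark.sql.connector.write.SupportsDelta;

// Hypothetical operation whose rows are identified by a composite key.
abstract class CompositeKeyDeltaOperation implements SupportsDelta {
  @Override
  public NamedReference[] rowId() {
    // Delta rows are matched on (region, id); both column names are assumed.
    return new NamedReference[] {
        Expressions.column("region"),
        Expressions.column("id")
    };
  }
}
-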
representUpdateAsDeleteAndInsert
default boolean representUpdateAsDeleteAndInsert()
Controls whether to represent updates as deletes and inserts.
Data sources may choose to split updates into deletes and inserts to either better cluster and order the incoming delta of rows or to simplify the write process.
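A source that prefers to receive updates as delete/insert pairs can override the default. A minimal sketch with an illustrative class name:

import org.apache.spark.sql.connector.write.SupportsDelta;

// Hypothetical operation that asks Spark to represent each update as a delete
// of the old row followed by an insert of the new row, e.g. so the incoming
// delta can be better clustered and ordered before it is written.
abstract class SplitUpdatesDeltaOperation implements SupportsDelta {
  @Override
  public boolean representUpdateAsDeleteAndInsert() {
    return true;
  }
}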