public final class DataFrameWriterV2<T> extends Object implements CreateTableWriter<T>
Interface used to write a Dataset to external storage using the v2 API.
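For orientation, a minimal Scala sketch of how this writer is obtained and used; the table name catalog.db.events and the sample frame are assumptions for illustration, not part of this API's contract:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("writer-v2-sketch").getOrCreate()
import spark.implicits._

// Hypothetical frame and table name, for illustration only.
val df = Seq((1L, "click"), (2L, "view")).toDF("id", "action")

// Dataset.writeTo returns a DataFrameWriterV2 bound to the target table;
// a terminal call (create, append, overwrite, ...) runs the write.
df.writeTo("catalog.db.events").createOrReplace()
```

The later examples on this page continue with this df and table name.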
| Modifier and Type | Method and Description |
|---|---|
| void | append() Append the contents of the data frame to the output table. |
| void | create() Create a new table from the contents of the data frame. |
| void | createOrReplace() Create a new table or replace an existing table with the contents of the data frame. |
| DataFrameWriterV2<T> | option(String key, String value) Add a write option. |
| DataFrameWriterV2<T> | options(scala.collection.Map<String,String> options) Add write options from a Scala Map. |
| DataFrameWriterV2<T> | options(java.util.Map<String,String> options) Add write options from a Java Map. |
| void | overwrite(Column condition) Overwrite rows matching the given filter condition with the contents of the data frame in the output table. |
| void | overwritePartitions() Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table. |
| CreateTableWriter<T> | partitionedBy(Column column, Column... columns) Partition the output table created by create, createOrReplace, or replace using the given columns or transforms. |
| CreateTableWriter<T> | partitionedBy(Column column, scala.collection.Seq<Column> columns) Partition the output table created by create, createOrReplace, or replace using the given columns or transforms. |
| void | replace() Replace an existing table with the contents of the data frame. |
| CreateTableWriter<T> | tableProperty(String property, String value) Add a table property. |
| CreateTableWriter<T> | using(String provider) Specifies a provider for the underlying output data source. |
Methods inherited from class Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.sql.WriteConfigMethods:
option, option, option

public void append()
            throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Append the contents of the data frame to the output table.

If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.

Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
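A sketch of calling append() against an existing table, continuing with the df and table name assumed in the opening sketch:

```scala
import org.apache.spark.sql.catalyst.analysis.NoSuchTableException

try {
  // Appends df's rows; the table's schema must be compatible with df.
  df.writeTo("catalog.db.events").append()
} catch {
  case e: NoSuchTableException =>
    // append() never creates the table; use create() or createOrReplace() first.
    println(s"Target table is missing: ${e.getMessage}")
}
```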
public void create()
Description copied from interface: CreateTableWriter
Create a new table from the contents of the data frame.

The new table's schema, partition layout, properties, and other configuration will be based on the configuration set on this writer.

If the output table exists, this operation will fail with TableAlreadyExistsException.

Specified by:
create in interface CreateTableWriter<T>
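As a sketch (same assumed df and table name as above), create() is a plain terminal call:

```scala
// Creates catalog.db.events from df's schema and contents; fails with
// TableAlreadyExistsException if the table is already defined.
df.writeTo("catalog.db.events").create()
```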
public void createOrReplace()
Description copied from interface: CreateTableWriter
Create a new table or replace an existing table with the contents of the data frame.

The output table's schema, partition layout, properties, and other configuration will be based on the contents of the data frame and the configuration set on this writer. If the table exists, its configuration and data will be replaced.

Specified by:
createOrReplace in interface CreateTableWriter<T>
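A sketch of the common idempotent-refresh use of createOrReplace(); the derived table name is an assumption for illustration:

```scala
import org.apache.spark.sql.functions.col

// The first run creates catalog.db.action_counts; later runs replace its
// schema, configuration, and data with the new aggregate.
df.groupBy(col("action")).count()
  .writeTo("catalog.db.action_counts")
  .createOrReplace()
```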
public DataFrameWriterV2<T> option(String key, String value)
Description copied from interface: WriteConfigMethods
Add a write option.

Specified by:
option in interface WriteConfigMethods<CreateTableWriter<T>>
Parameters:
key - (undocumented)
value - (undocumented)
public DataFrameWriterV2<T> options(scala.collection.Map<String,String> options)
Description copied from interface: WriteConfigMethods
Add write options from a Scala Map.

Specified by:
options in interface WriteConfigMethods<CreateTableWriter<T>>
Parameters:
options - (undocumented)
public DataFrameWriterV2<T> options(java.util.Map<String,String> options)
Description copied from interface: WriteConfigMethods
Add write options from a Java Map.

Specified by:
options in interface WriteConfigMethods<CreateTableWriter<T>>
Parameters:
options - (undocumented)
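Option keys are interpreted by the underlying data source. In the sketch below, compression is a common Parquet write option, while the second key is purely a placeholder:

```scala
df.writeTo("catalog.db.events")
  .option("compression", "zstd")          // common Parquet write option
  .options(Map("custom.key" -> "value"))  // placeholder; keys are source-specific
  .append()
```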
public void overwrite(Column condition)
               throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Overwrite rows matching the given filter condition with the contents of the data frame in the output table.

If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.

Parameters:
condition - (undocumented)
Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
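A sketch of a conditional overwrite; dailyDf and the day column are assumptions, and the condition should describe exactly the rows the frame replaces:

```scala
import org.apache.spark.sql.functions.col

// Replaces the rows where day = '2019-06-01' with dailyDf's contents.
dailyDf.writeTo("catalog.db.events")
  .overwrite(col("day") === "2019-06-01")
```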
public void overwritePartitions()
                         throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.

This operation is equivalent to Hive's INSERT OVERWRITE ... PARTITION, which replaces partitions dynamically depending on the contents of the data frame.

If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.

Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
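A sketch of a dynamic partition overwrite, reusing the assumed dailyDf from above:

```scala
// Only the partitions that dailyDf actually touches (say, day=2019-06-01)
// are replaced; every other partition of the table is left as-is.
dailyDf.writeTo("catalog.db.events").overwritePartitions()
```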
public CreateTableWriter<T> partitionedBy(Column column, Column... columns)
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.

public CreateTableWriter<T> partitionedBy(Column column, scala.collection.Seq<Column> columns)
Description copied from interface: CreateTableWriter
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.

When specified, the table data will be stored by these values for efficient reads.

For example, when a table is partitioned by day, it may be stored in a directory layout like:

    table/day=2019-06-01/
    table/day=2019-06-02/

Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.

Specified by:
partitionedBy in interface CreateTableWriter<T>
Parameters:
column - (undocumented)
columns - (undocumented)
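The transform functions in org.apache.spark.sql.functions (years, months, days, hours, bucket) can be passed here; whether a given transform is accepted depends on the target catalog. A sketch with assumed column names and bucket count:

```scala
import org.apache.spark.sql.functions.{bucket, col, days}

df.writeTo("catalog.db.events")
  .using("parquet")                        // provider is an assumption
  .partitionedBy(days(col("event_time")),  // one partition per day
                 bucket(16, col("id")))    // 16 hash buckets on id
  .create()
```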
public void replace()
Description copied from interface: CreateTableWriter
Replace an existing table with the contents of the data frame.

The existing table's schema, partition layout, properties, and other configuration will be replaced with the contents of the data frame and the configuration set on this writer.

If the output table does not exist, this operation will fail with CannotReplaceMissingTableException.

Specified by:
replace in interface CreateTableWriter<T>
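A sketch of replace(); the import path for CannotReplaceMissingTableException is an assumption here:

```scala
import org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException

try {
  df.writeTo("catalog.db.events").replace()
} catch {
  case e: CannotReplaceMissingTableException =>
    // replace() demands an existing table; prefer createOrReplace()
    // when the table may not exist yet.
    println(s"Nothing to replace: ${e.getMessage}")
}
```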
public CreateTableWriter<T> tableProperty(String property, String value)
Description copied from interface: CreateTableWriter
Add a table property.

Specified by:
tableProperty in interface CreateTableWriter<T>
Parameters:
property - (undocumented)
value - (undocumented)
public CreateTableWriter<T> using(String provider)
Description copied from interface: CreateTableWriter
Specifies a provider for the underlying output data source.

Specified by:
using in interface CreateTableWriter<T>
Parameters:
provider - (undocumented)
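Putting the CreateTableWriter pieces together, a sketch of a full table-creation chain; the provider and the property key/value are placeholders, not documented values:

```scala
import org.apache.spark.sql.functions.col

df.writeTo("catalog.db.events")
  .using("parquet")                    // v2 data source for the new table
  .tableProperty("owner", "data-eng")  // arbitrary key/value table metadata
  .partitionedBy(col("action"))
  .create()
```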