abstract class DataFrameWriterV2[T] extends CreateTableWriter[T]
Interface used to write an org.apache.spark.sql.Dataset to external storage using the v2 API.
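A writer is obtained from Dataset.writeTo and configured fluently before one terminal action (append, create, createOrReplace, replace, overwrite, or overwritePartitions) runs. A minimal sketch follows; the session setup, the data, and the table name catalog.db.events (which assumes a configured v2 catalog named "catalog") are illustrative and are reused in the member examples below:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("v2-writer-sketch").getOrCreate()
import spark.implicits._

// Hypothetical data frame reused by the examples below.
val df = Seq((1L, "2019-06-01", "click"), (2L, "2019-06-02", "view"))
  .toDF("id", "day", "event")

// "catalog.db.events" assumes a v2 catalog named "catalog" is configured.
df.writeTo("catalog.db.events")   // returns a DataFrameWriterV2[Row]
  .using("parquet")               // provider used by create/createOrReplace
  .createOrReplace()              // terminal action
```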
- Annotations
- @Experimental()
- Source
- DataFrameWriterV2.scala
- Since
- 3.0.0 
Linear Supertypes
- CreateTableWriter
- WriteConfigMethods
- AnyRef
- Any
Instance Constructors
-  new DataFrameWriterV2()
Abstract Value Members
-   abstract def append(): Unit
 Append the contents of the data frame to the output table. If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Annotations
- @throws(classOf[NoSuchTableException])
- Exceptions thrown
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException if the table does not exist
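A hedged sketch reusing the hypothetical spark, df, and table from the opening example:

```scala
// Appends df's rows to the existing table; throws NoSuchTableException
// if catalog.db.events has not been created yet.
df.writeTo("catalog.db.events").append()
```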
 
-   abstract def clusterBy(colName: String, colNames: String*): DataFrameWriterV2.this.type
 Clusters the output by the given columns on the storage. The rows with matching values in the specified clustering columns will be consolidated within the same group. For instance, if you cluster a dataset by date, the data sharing the same date will be stored together in a file. This arrangement improves query efficiency when you apply selective filters to these clustering columns, thanks to data skipping.
- Definition Classes
- DataFrameWriterV2 → CreateTableWriter
- Annotations
- @varargs()
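A sketch reusing the hypothetical df and table name from above; note that clusterBy is only present in recent Spark releases:

```scala
// Create the table with rows clustered by the "day" column, so rows
// sharing a day are co-located and selective filters can skip data.
df.writeTo("catalog.db.events")
  .using("parquet")
  .clusterBy("day")
  .create()
```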
 
-   abstract def create(): Unit
 Create a new table from the contents of the data frame. The new table's schema, partition layout, properties, and other configuration will be based on the configuration set on this writer. If the output table exists, this operation will fail with org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException.
- Definition Classes
- CreateTableWriter
- Annotations
- @throws(classOf[TableAlreadyExistsException])
- Exceptions thrown
- org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException if the table already exists
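Reusing the hypothetical df and table name from the opening sketch:

```scala
// Creates catalog.db.events from df's schema and this writer's config;
// throws TableAlreadyExistsException if the table already exists.
df.writeTo("catalog.db.events")
  .using("parquet")
  .create()
```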
 
-   abstract def createOrReplace(): Unit
 Create a new table or replace an existing table with the contents of the data frame. The output table's schema, partition layout, properties, and other configuration will be based on the contents of the data frame and the configuration set on this writer. If the table exists, its configuration and data will be replaced.
- Definition Classes
- CreateTableWriter
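The idempotent variant, again with the hypothetical table name:

```scala
// Creates the table on the first run and replaces it on later runs,
// which makes repeated batch jobs idempotent.
df.writeTo("catalog.db.events")
  .using("parquet")
  .createOrReplace()
```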
 
-   abstract def option(key: String, value: String): DataFrameWriterV2.this.type
 Add a write option.
- Definition Classes
- DataFrameWriterV2 → WriteConfigMethods
 
-   abstract def options(options: java.util.Map[String, String]): DataFrameWriterV2.this.type
 Add write options from a Java Map.
- Definition Classes
- DataFrameWriterV2 → WriteConfigMethods
 
-   abstract def options(options: scala.collection.Map[String, String]): DataFrameWriterV2.this.type
 Add write options from a Scala Map.
- Definition Classes
- DataFrameWriterV2 → WriteConfigMethods
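A sketch combining the single-option and bulk-option forms; the option keys shown are illustrative, and which keys are honored depends on the underlying data source:

```scala
df.writeTo("catalog.db.events")
  .using("parquet")
  .option("compression", "zstd")                      // single write option
  .options(Map("parquet.block.size" -> "134217728"))  // bulk Scala Map overload
  .createOrReplace()
```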
 
-   abstract def overwrite(condition: Column): Unit
 Overwrite rows matching the given filter condition with the contents of the data frame in the output table. If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Annotations
- @throws(classOf[NoSuchTableException])
- Exceptions thrown
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException if the table does not exist
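A sketch replacing a single day's rows, reusing the hypothetical df and table:

```scala
import org.apache.spark.sql.functions.col

// Deletes rows matching the condition, then appends df's contents;
// rows outside day=2019-06-01 are left untouched.
df.writeTo("catalog.db.events")
  .overwrite(col("day") === "2019-06-01")
```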
 
-   abstract def overwritePartitions(): Unit
 Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table. This operation is equivalent to Hive's INSERT OVERWRITE ... PARTITION, which replaces partitions dynamically depending on the contents of the data frame. If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Annotations
- @throws(classOf[NoSuchTableException])
- Exceptions thrown
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException if the table does not exist
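Since the hypothetical df holds rows for two days, a dynamic overwrite would replace exactly those two partitions:

```scala
// Replaces only the partitions df contains rows for (here the
// hypothetical day=2019-06-01 and day=2019-06-02), like Hive's
// INSERT OVERWRITE ... PARTITION with dynamic partitioning.
df.writeTo("catalog.db.events").overwritePartitions()
```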
 
-   abstract def partitionedBy(column: Column, columns: Column*): DataFrameWriterV2.this.type
 Partition the output table created by create, createOrReplace, or replace using the given columns or transforms. When specified, the table data will be stored by these values for efficient reads. For example, when a table is partitioned by day, it may be stored in a directory layout like:
- table/day=2019-06-01/
- table/day=2019-06-02/
 Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
- Definition Classes
- DataFrameWriterV2 → CreateTableWriter
- Annotations
- @varargs()
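A sketch partitioning the created table by the hypothetical day column; for timestamp columns, transforms from org.apache.spark.sql.functions such as days, months, or bucket can be passed instead:

```scala
import org.apache.spark.sql.functions.col

df.writeTo("catalog.db.events")
  .using("parquet")
  .partitionedBy(col("day"))  // yields the table/day=.../ layout above
  .create()
```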
 
-   abstract def replace(): Unit
 Replace an existing table with the contents of the data frame. The existing table's schema, partition layout, properties, and other configuration will be replaced with the contents of the data frame and the configuration set on this writer. If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException.
- Definition Classes
- CreateTableWriter
- Annotations
- @throws(classOf[CannotReplaceMissingTableException])
- Exceptions thrown
- org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException if the table does not exist
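Sketch, same hypothetical table:

```scala
// Drops and recreates an existing table from df and this writer's config;
// throws CannotReplaceMissingTableException if the table does not exist.
df.writeTo("catalog.db.events")
  .using("parquet")
  .replace()
```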
 
-   abstract def tableProperty(property: String, value: String): DataFrameWriterV2.this.type
 Add a table property.
- Definition Classes
- DataFrameWriterV2 → CreateTableWriter
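A sketch attaching metadata at creation time; the property key "owner" is an illustrative assumption interpreted by the catalog or source, not a key defined by this API:

```scala
df.writeTo("catalog.db.events")
  .using("parquet")
  .tableProperty("owner", "data-eng")
  .create()
```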
 
-   abstract def using(provider: String): DataFrameWriterV2.this.type
 Specifies a provider for the underlying output data source. Spark's default catalog supports "parquet", "json", etc.
- Definition Classes
- DataFrameWriterV2 → CreateTableWriter
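For example, the same hypothetical data written as JSON instead of Parquet:

```scala
df.writeTo("catalog.db.events_json")
  .using("json")
  .create()
```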
 
Concrete Value Members
-   final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
 
-   final def ##: Int
- Definition Classes
- AnyRef → Any
 
-   final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
 
-   final def asInstanceOf[T0]: T0
- Definition Classes
- Any
 
-    def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
 
-   final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
 
-    def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
 
-   final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
-    def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
-   final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
 
-   final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
 
-   final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-   final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-    def option(key: String, value: Double): DataFrameWriterV2.this.type
 Add a double output option.
- Definition Classes
- DataFrameWriterV2 → WriteConfigMethods
 
-    def option(key: String, value: Long): DataFrameWriterV2.this.type
 Add a long output option.
- Definition Classes
- DataFrameWriterV2 → WriteConfigMethods
 
-    def option(key: String, value: Boolean): DataFrameWriterV2.this.type
 Add a boolean output option.
- Definition Classes
- DataFrameWriterV2 → WriteConfigMethods
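These typed overloads stringify the value, equivalent to option(key, value.toString); the keys below are illustrative:

```scala
df.writeTo("catalog.db.events")
  .option("maxRecordsPerFile", 10000L)  // Long overload
  .option("fsync", true)                // Boolean overload (hypothetical key)
  .append()
```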
 
-   final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
 
-    def toString(): String
- Definition Classes
- AnyRef → Any
 
-   final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
-   final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
 
-   final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
Deprecated Value Members
-    def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
- (Since version 9)