org.apache.spark.sql

DataFrameWriterV2

final class DataFrameWriterV2[T] extends CreateTableWriter[T]

Interface used to write an org.apache.spark.sql.Dataset to external storage using the v2 API.
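
A DataFrameWriterV2 is obtained from Dataset.writeTo. Minimal usage sketch (the catalog, namespace, table name, and sample data are illustrative):

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().getOrCreate()
  import spark.implicits._

  // A small data frame to write; df is reused in the method examples below.
  val df = Seq((1L, "a"), (2L, "b")).toDF("id", "category")

  // Configure and run a v2 write: create a new table from the data frame.
  df.writeTo("catalog.db.events").create()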

Annotations
@Experimental()
Source
DataFrameWriterV2.scala
Since

3.0.0

Linear Supertypes
CreateTableWriter[T], WriteConfigMethods[CreateTableWriter[T]], AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def append(): Unit

    Append the contents of the data frame to the output table.

    If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
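
    Example (a minimal sketch; df and the table name are illustrative):

      // Appends rows to an existing v2 table; throws NoSuchTableException if it is missing.
      df.writeTo("catalog.db.events").append()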

    Annotations
    @throws(classOf[NoSuchTableException])
    Exceptions thrown

    org.apache.spark.sql.catalyst.analysis.NoSuchTableException If the table does not exist

  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  7. def create(): Unit

    Create a new table from the contents of the data frame.

    The new table's schema, partition layout, properties, and other configuration will be based on the configuration set on this writer.

    If the output table exists, this operation will fail with org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException.
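
    Example (a minimal sketch; df and the table name are illustrative):

      // Creates a new v2 table from df; throws TableAlreadyExistsException if it exists.
      df.writeTo("catalog.db.events").create()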

    Definition Classes
    DataFrameWriterV2 → CreateTableWriter
    Exceptions thrown

    org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException If the table already exists

  8. def createOrReplace(): Unit

    Create a new table or replace an existing table with the contents of the data frame.

    The output table's schema, partition layout, properties, and other configuration will be based on the contents of the data frame and the configuration set on this writer. If the table exists, its configuration and data will be replaced.
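
    Example (a minimal sketch; df and the table name are illustrative):

      // Creates the table if absent, otherwise replaces its schema, config, and data.
      df.writeTo("catalog.db.events").createOrReplace()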

    Definition Classes
    DataFrameWriterV2 → CreateTableWriter
  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  11. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  16. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  17. def option(key: String, value: String): DataFrameWriterV2[T]

    Add a write option.
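
    Example (a minimal sketch; the option key and value are illustrative and depend on the data source):

      df.writeTo("catalog.db.events")
        .option("compression", "zstd")
        .append()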

    Definition Classes
    DataFrameWriterV2 → WriteConfigMethods
    Since

    3.0.0

  18. def option(key: String, value: Double): CreateTableWriter[T]

    Add a double output option.

    Definition Classes
    WriteConfigMethods
    Since

    3.0.0

  19. def option(key: String, value: Long): CreateTableWriter[T]

    Add a long output option.

    Definition Classes
    WriteConfigMethods
    Since

    3.0.0

  20. def option(key: String, value: Boolean): CreateTableWriter[T]

    Add a boolean output option.
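
    The Double, Long, and Boolean overloads are typed conveniences over the String form. Example (a minimal sketch; the option key is illustrative):

      // Equivalent to .option("mergeSchema", "true"); the typed overloads return CreateTableWriter[T].
      df.writeTo("catalog.db.events").option("mergeSchema", true).create()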

    Definition Classes
    WriteConfigMethods
    Since

    3.0.0

  21. def options(options: scala.collection.Map[String, String]): DataFrameWriterV2[T]

    Add write options from a Scala Map.

  22. def options(options: java.util.Map[String, String]): DataFrameWriterV2[T]

    Add write options from a Java Map.
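
    Example (a minimal sketch shown with the Scala Map overload; keys and values are illustrative):

      df.writeTo("catalog.db.events")
        .options(Map("compression" -> "zstd", "mergeSchema" -> "true"))
        .append()
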
  23. def overwrite(condition: Column): Unit

    Overwrite rows matching the given filter condition with the contents of the data frame in the output table.

    If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
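
    Example (a minimal sketch; df, the table name, and the day column are illustrative):

      import org.apache.spark.sql.functions.col

      // Replaces rows matching the condition with the contents of df.
      df.writeTo("catalog.db.events").overwrite(col("day") === "2019-06-01")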

    Annotations
    @throws(classOf[NoSuchTableException])
    Exceptions thrown

    org.apache.spark.sql.catalyst.analysis.NoSuchTableException If the table does not exist

  24. def overwritePartitions(): Unit

    Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.

    This operation is equivalent to Hive's INSERT OVERWRITE ... PARTITION, which replaces partitions dynamically depending on the contents of the data frame.

    If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
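
    Example (a minimal sketch; df and the table name are illustrative):

      // Dynamically replaces exactly those partitions for which df contains rows.
      df.writeTo("catalog.db.events").overwritePartitions()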

    Annotations
    @throws(classOf[NoSuchTableException])
    Exceptions thrown

    org.apache.spark.sql.catalyst.analysis.NoSuchTableException If the table does not exist

  25. def partitionedBy(column: Column, columns: Column*): CreateTableWriter[T]

    Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.

    When specified, the table data will be stored by these values for efficient reads.

    For example, when a table is partitioned by day, it may be stored in a directory layout like:

    • table/day=2019-06-01/
    • table/day=2019-06-02/

    Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
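
    Example (a minimal sketch; df, the table name, and the column names are illustrative):

      import org.apache.spark.sql.functions.{col, days}

      // Partition by an identity column and by a daily transform of a timestamp column.
      df.writeTo("catalog.db.events")
        .partitionedBy(col("region"), days(col("ts")))
        .create()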

    Definition Classes
    DataFrameWriterV2 → CreateTableWriter
    Annotations
    @varargs()
    Since

    3.0.0

  26. def replace(): Unit

    Replace an existing table with the contents of the data frame.

    The existing table's schema, partition layout, properties, and other configuration will be replaced with the contents of the data frame and the configuration set on this writer.

    If the output table does not exist, this operation will fail with org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException.
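
    Example (a minimal sketch; df and the table name are illustrative):

      // Replaces the existing table's schema, config, and data with those of df;
      // throws CannotReplaceMissingTableException if the table does not exist.
      df.writeTo("catalog.db.events").replace()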

    Definition Classes
    DataFrameWriterV2 → CreateTableWriter
    Exceptions thrown

    org.apache.spark.sql.catalyst.analysis.CannotReplaceMissingTableException If the table does not exist

  27. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  28. def tableProperty(property: String, value: String): CreateTableWriter[T]

    Add a table property.
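
    Example (a minimal sketch; the property key and value are illustrative and depend on the catalog):

      df.writeTo("catalog.db.events")
        .tableProperty("write.format.default", "parquet")
        .create()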

    Definition Classes
    DataFrameWriterV2 → CreateTableWriter
  29. def toString(): String
    Definition Classes
    AnyRef → Any
  30. def using(provider: String): CreateTableWriter[T]

    Specifies a provider for the underlying output data source. Spark's default catalog supports "parquet", "json", etc.
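
    Example (a minimal sketch; df and the table name are illustrative):

      // Creates the table with the parquet provider.
      df.writeTo("catalog.db.events").using("parquet").create()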

    Definition Classes
    DataFrameWriterV2 → CreateTableWriter
    Since

    3.0.0

  31. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  32. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  33. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
