org.apache.spark.sql.connector.write

RequiresDistributionAndOrdering

trait RequiresDistributionAndOrdering extends Write

A write that requires a specific distribution and ordering of data.

Annotations
@Experimental()
Source
RequiresDistributionAndOrdering.java
Since

3.2.0

Linear Supertypes
Write, AnyRef, Any

Abstract Value Members

  1. abstract def requiredDistribution(): Distribution

    Returns the distribution required by this write.

    Spark will distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write.

    Batch and micro-batch writes can request a particular data distribution. If a distribution is requested in the micro-batch context, incoming records in each micro batch will satisfy the required distribution (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support distribution requirements.

    Implementations may return UnspecifiedDistribution if they don't require any specific distribution of data on write.

    returns

    the required distribution

  2. abstract def requiredOrdering(): Array[SortOrder]

    Returns the ordering required by this write.

    Spark will order incoming records within partitions to satisfy the required ordering before passing those records to the data source table on write.

    Batch and micro-batch writes can request a particular data ordering. If an ordering is requested in the micro-batch context, incoming records in each micro batch will satisfy the required ordering (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support ordering requirements.

    Implementations may return an empty array if they don't require any specific ordering of data on write (see the sketch after this list).

    returns

    the required ordering
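
As a rough illustration, here is a minimal Scala sketch implementing both abstract methods for a hypothetical table that wants records clustered by a date column and sorted within each partition by date and then id. The class name PartitionedTableWrite, the column names, and the wrapped BatchWrite are assumptions for the example, not part of the API.

  import org.apache.spark.sql.connector.distributions.{Distribution, Distributions}
  import org.apache.spark.sql.connector.expressions.{Expression, Expressions, SortDirection, SortOrder}
  import org.apache.spark.sql.connector.write.{BatchWrite, RequiresDistributionAndOrdering}

  // Hypothetical write: cluster records by `date` and sort each partition
  // by `date` and then `id` before rows reach the table on write.
  class PartitionedTableWrite(batchWrite: BatchWrite) extends RequiresDistributionAndOrdering {

    // Co-locate all records with the same `date` value in one partition.
    override def requiredDistribution(): Distribution =
      Distributions.clustered(Array[Expression](Expressions.column("date")))

    // Within each partition, sort by `date` ascending, then `id` ascending.
    override def requiredOrdering(): Array[SortOrder] = Array(
      Expressions.sort(Expressions.column("date"), SortDirection.ASCENDING),
      Expressions.sort(Expressions.column("id"), SortDirection.ASCENDING)
    )

    override def toBatch(): BatchWrite = batchWrite
  }

Spark will then shuffle and sort the incoming records as requested before handing them to the wrapped BatchWrite.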

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def advisoryPartitionSizeInBytes(): Long

    Returns the advisory (not guaranteed) shuffle partition size in bytes for this write.

    Implementations may override this to indicate the preferable partition size in shuffles performed to satisfy the requested distribution. Note that Spark doesn't support setting the advisory partition size for UnspecifiedDistribution; the query will fail if the advisory partition size is set but the distribution is unspecified. Data sources may either request a particular number of partitions via requiredNumPartitions() or a preferred partition size, not both.

    Data sources should be careful with large advisory sizes, as they reduce write parallelism and may degrade overall job performance.

    Note that this value acts only as guidance; Spark does not guarantee that the actual and advisory shuffle partition sizes will match. It is ignored if adaptive query execution is disabled.

    returns

    the advisory partition size; any value less than 1 means no preference (see the tuning sketch after this list).

  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  7. def description(): String

    Returns the description associated with this write.

    Definition Classes
    Write
  8. def distributionStrictlyRequired(): Boolean

    Returns whether the distribution required by this write is strictly required or best-effort only.

    If true, Spark will strictly distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write. Otherwise, Spark may apply certain optimizations to speed up the query but break the distribution requirement.

    returns

    true if the distribution required by this write is strictly required; false otherwise.

  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  11. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  16. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  17. def requiredNumPartitions(): Int

    Returns the number of partitions required by this write.

    Implementations may override this to require a specific number of input partitions.

    Note that Spark doesn't support a required number of partitions with UnspecifiedDistribution; the query will fail if a number of partitions is provided but the distribution is unspecified. Data sources may either request a particular number of partitions or a preferred partition size via advisoryPartitionSizeInBytes(), not both.

    returns

    the required number of partitions; any value less than 1 means no requirement.

  18. def supportedCustomMetrics(): Array[CustomMetric]

    Returns an array of supported custom metrics with name and description. By default this returns an empty array.

    Definition Classes
    Write
  19. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  20. def toBatch(): BatchWrite

    Returns a BatchWrite to write data to a batch source. By default this method throws an exception; data sources must override it to provide an implementation if the Table that creates this write reports TableCapability#BATCH_WRITE support in its Table#capabilities().

    Definition Classes
    Write
  21. def toStreaming(): StreamingWrite

    Returns a StreamingWrite to write data to a streaming source. By default this method throws an exception; data sources must override it to provide an implementation if the Table that creates this write reports TableCapability#STREAMING_WRITE support in its Table#capabilities().

    Definition Classes
    Write
  22. def toString(): String
    Definition Classes
    AnyRef → Any
  23. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  24. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  25. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
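
As a rough illustration of the tuning hooks above (advisoryPartitionSizeInBytes, distributionStrictlyRequired, requiredNumPartitions), the hypothetical sketch from the abstract members section could be extended as follows. The subclass name and the 128 MiB advisory size are arbitrary assumptions for the example; remember that an advisory size and a required partition count must not be set together.

  import org.apache.spark.sql.connector.write.BatchWrite

  // Builds on the hypothetical PartitionedTableWrite sketched earlier.
  class TunedPartitionedTableWrite(batchWrite: BatchWrite)
      extends PartitionedTableWrite(batchWrite) {

    // Best effort only: Spark may skip the shuffle to speed up the query.
    override def distributionStrictlyRequired(): Boolean = false

    // Advisory shuffle partition size; acts only as guidance and is
    // ignored when adaptive query execution is disabled.
    override def advisoryPartitionSizeInBytes(): Long = 128L * 1024 * 1024

    // Alternatively, request a fixed partition count instead (never both):
    // override def requiredNumPartitions(): Int = 200
  }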

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
