
org.apache.spark.sql.connector.write

RequiresDistributionAndOrdering

trait RequiresDistributionAndOrdering extends Write

A write that requires a specific distribution and ordering of data.

Annotations
@Experimental()
Source
RequiresDistributionAndOrdering.java
Since

3.2.0

Linear Supertypes
Write, AnyRef, Any

Abstract Value Members

  1. abstract def requiredDistribution(): Distribution

    Returns the distribution required by this write.

    Spark will distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write.

    Batch and micro-batch writes can request a particular data distribution. If a distribution is requested in the micro-batch context, incoming records in each micro batch will satisfy the required distribution (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support distribution requirements.

    Implementations may return UnspecifiedDistribution if they don't require any specific distribution of data on write. (A minimal implementation sketch follows this list.)

    returns

    the required distribution

  2. abstract def requiredOrdering(): Array[SortOrder]

    Returns the ordering required by this write.

    Spark will order incoming records within partitions to satisfy the required ordering before passing those records to the data source table on write.

    Batch and micro-batch writes can request a particular data ordering. If an ordering is requested in the micro-batch context, incoming records in each micro batch will satisfy the required ordering (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support ordering requirements.

    Implementations may return an empty array if they don't require any specific ordering of data on write; see the sketch after this list.

    returns

    the required ordering
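
As referenced above, here is a minimal sketch of a write implementing both abstract members: it clusters incoming records by a date column and sorts each partition by that column. The MyClusteredWrite class, the "date" column, and the wrapped BatchWrite delegate are assumptions invented for this example, not part of the API; the Distributions and Expressions factories come from the org.apache.spark.sql.connector.distributions and org.apache.spark.sql.connector.expressions packages.

  import org.apache.spark.sql.connector.distributions.{Distribution, Distributions}
  import org.apache.spark.sql.connector.expressions.{Expression, Expressions, SortDirection, SortOrder}
  import org.apache.spark.sql.connector.write.{BatchWrite, RequiresDistributionAndOrdering}

  // Hypothetical write: cluster incoming rows by "date", then sort each
  // partition by "date" ascending before rows reach the underlying batch write.
  class MyClusteredWrite(delegate: BatchWrite) extends RequiresDistributionAndOrdering {

    override def requiredDistribution(): Distribution =
      Distributions.clustered(Array[Expression](Expressions.column("date")))

    override def requiredOrdering(): Array[SortOrder] =
      Array(Expressions.sort(Expressions.column("date"), SortDirection.ASCENDING))

    override def toBatch(): BatchWrite = delegate
  }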

Concrete Value Members

  1. def advisoryPartitionSizeInBytes(): Long

    Returns the advisory (not guaranteed) shuffle partition size in bytes for this write.

    Implementations may override this to indicate the preferred partition size in shuffles performed to satisfy the requested distribution. Note that Spark doesn't support setting the advisory partition size for UnspecifiedDistribution; the query will fail if the advisory partition size is set but the distribution is unspecified. Data sources may either request a particular number of partitions via #requiredNumPartitions() or a preferred partition size, not both.

    Data sources should be careful with large advisory sizes, as they reduce write parallelism and may degrade overall job performance.

    Note that this value acts only as guidance; Spark does not guarantee that the actual and advisory shuffle partition sizes will match. It is ignored if adaptive query execution is disabled. (See the usage sketch at the end of this list.)

    returns

    the advisory partition size; any value less than 1 means no preference.

  2. def description(): String

    Returns the description associated with this write.

    Definition Classes
    Write
  3. def distributionStrictlyRequired(): Boolean

    Returns whether the distribution required by this write is strictly required or best effort only.

    If true, Spark will strictly distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write. Otherwise, Spark may apply optimizations that speed up the query but break the distribution requirement, as shown in the sketch at the end of this list.

    returns

    true if the distribution required by this write is strictly required; false otherwise.

  4. def requiredNumPartitions(): Int

    Returns the number of partitions required by this write.

    Implementations may override this to require a specific number of input partitions.

    Note that Spark doesn't support a required number of partitions with UnspecifiedDistribution; the query will fail if a number of partitions is provided but the distribution is unspecified. Data sources may either request a particular number of partitions or a preferred partition size via #advisoryPartitionSizeInBytes, not both.

    returns

    the required number of partitions; any value less than 1 means no requirement.

  5. def supportedCustomMetrics(): Array[CustomMetric]

    Returns an array of supported custom metrics with name and description. By default it returns an empty array.

    Definition Classes
    Write
  6. def toBatch(): BatchWrite

    Returns a BatchWrite to write data to a batch source. By default this method throws an exception; data sources must override it to provide an implementation if the Table that creates this write reports TableCapability#BATCH_WRITE support in its Table#capabilities().

    Definition Classes
    Write
  7. def toStreaming(): StreamingWrite

    Returns a StreamingWrite to write data to a streaming source. By default this method throws an exception; data sources must override it to provide an implementation if the Table that creates this write reports TableCapability#STREAMING_WRITE support in its Table#capabilities().

    Definition Classes
    Write
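
Putting the concrete members together, here is a hedged sketch of a write that requests best-effort clustering with an advisory 64 MiB shuffle partition size. The MySizedWrite class and the "bucket" column are assumptions invented for this example; the factory calls come from the same org.apache.spark.sql.connector.distributions and org.apache.spark.sql.connector.expressions packages as above.

  import org.apache.spark.sql.connector.distributions.{Distribution, Distributions}
  import org.apache.spark.sql.connector.expressions.{Expression, Expressions, SortOrder}
  import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering

  // Hypothetical write: best-effort clustering by "bucket" with an advisory
  // 64 MiB shuffle partition size (honored only under adaptive execution).
  class MySizedWrite extends RequiresDistributionAndOrdering {

    override def requiredDistribution(): Distribution =
      Distributions.clustered(Array[Expression](Expressions.column("bucket")))

    // No within-partition ordering requirement.
    override def requiredOrdering(): Array[SortOrder] = Array.empty[SortOrder]

    // Advisory only; must not be combined with requiredNumPartitions().
    override def advisoryPartitionSizeInBytes(): Long = 64L * 1024 * 1024

    // Best effort: Spark may skip the shuffle when an optimization applies.
    override def distributionStrictlyRequired(): Boolean = false
  }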