Modifier and Type | Method and Description
---|---
default long | advisoryPartitionSizeInBytes() Returns the advisory (not guaranteed) shuffle partition size in bytes for this write.
default boolean | distributionStrictlyRequired() Returns whether the distribution required by this write is strictly required or best effort only.
Distribution | requiredDistribution() Returns the distribution required by this write.
default int | requiredNumPartitions() Returns the number of partitions required by this write.
SortOrder[] | requiredOrdering() Returns the ordering required by this write.
Methods inherited from interface org.apache.spark.sql.connector.write.Write: description, supportedCustomMetrics, toBatch, toStreaming
Distribution requiredDistribution()
Spark will distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write.
Batch and micro-batch writes can request a particular data distribution. If a distribution is requested in the micro-batch context, incoming records in each micro batch will satisfy the required distribution (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support distribution requirements.
Implementations may return UnspecifiedDistribution if they don't require any specific distribution of data on write.
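For instance, a write that wants all rows sharing a key to land in the same write task can request a clustered distribution. A minimal sketch, assuming a hypothetical table class and a hypothetical "region" column; Distributions and Expressions are the factory classes from org.apache.spark.sql.connector:

```java
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Sketch: cluster incoming records by a hypothetical "region" column so that
// all rows for one region are routed to the same input partition of the write.
public class RegionClusteredWrite implements RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    return Distributions.clustered(new Expression[] {Expressions.column("region")});
  }

  @Override
  public SortOrder[] requiredOrdering() {
    return new SortOrder[0];  // no ordering requirement within partitions
  }
}
```

Spark will insert the shuffle needed to satisfy this distribution before invoking the write.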
default boolean distributionStrictlyRequired()
If true, Spark will strictly distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write. Otherwise, Spark may apply certain optimizations to speed up the query but break the distribution requirement.
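An implementation that prefers throughput over exact placement can relax its distribution to best effort by overriding this method. A sketch, with a hypothetical class and a hypothetical "date" column:

```java
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Sketch: ask for clustering by "date", but allow Spark to relax the
// requirement when an optimization makes the query faster.
public class BestEffortClusteredWrite implements RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    return Distributions.clustered(new Expression[] {Expressions.column("date")});
  }

  @Override
  public boolean distributionStrictlyRequired() {
    return false;  // best effort only: Spark may skip or weaken the shuffle
  }

  @Override
  public SortOrder[] requiredOrdering() {
    return new SortOrder[0];  // no in-partition ordering needed
  }
}
```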
default int requiredNumPartitions()
Implementations may override this to require a specific number of input partitions.
Note that Spark doesn't support a number of partitions with UnspecifiedDistribution; the query will fail if a number of partitions is provided but the distribution is unspecified. Data sources may request either a particular number of partitions or a preferred partition size via advisoryPartitionSizeInBytes(), not both.
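A bucketed table with a fixed bucket count is a typical use of this method: it can pin the partition count so each write task handles exactly one bucket. A sketch, where the class, the "user_id" column, and the bucket count are all hypothetical:

```java
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Sketch: one input partition per bucket of a 16-bucket table.
public class BucketedWrite implements RequiresDistributionAndOrdering {
  private static final int NUM_BUCKETS = 16;

  @Override
  public Distribution requiredDistribution() {
    // A concrete distribution is mandatory here: requiredNumPartitions()
    // is rejected when the distribution is unspecified.
    return Distributions.clustered(new Expression[] {Expressions.column("user_id")});
  }

  @Override
  public int requiredNumPartitions() {
    return NUM_BUCKETS;  // exactly one write task per bucket
  }

  @Override
  public SortOrder[] requiredOrdering() {
    return new SortOrder[0];
  }
}
```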
default long advisoryPartitionSizeInBytes()
Implementations may override this to indicate the preferred partition size for shuffles performed to satisfy the requested distribution. Note that Spark doesn't support setting the advisory partition size for UnspecifiedDistribution; the query will fail if the advisory partition size is set but the distribution is unspecified. Data sources may request either a particular number of partitions via requiredNumPartitions() or a preferred partition size, not both.
Data sources should be careful with large advisory sizes, as they will reduce the write parallelism and may degrade overall job performance.
Note that this value acts only as guidance: Spark does not guarantee that the actual and advisory shuffle partition sizes will match. It is ignored if adaptive query execution is disabled.
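Instead of pinning a partition count, a data source can let adaptive query execution size the shuffle and only hint at a target. A sketch, with a hypothetical class and column name:

```java
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Sketch: cluster by "tenant_id" and hint that shuffle partitions should be
// roughly 128 MiB. Spark may still produce larger or smaller partitions,
// and ignores the hint entirely when adaptive execution is disabled.
public class SizeHintedWrite implements RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    return Distributions.clustered(new Expression[] {Expressions.column("tenant_id")});
  }

  @Override
  public long advisoryPartitionSizeInBytes() {
    return 128L * 1024 * 1024;  // ~128 MiB per shuffle partition (advisory)
  }

  @Override
  public SortOrder[] requiredOrdering() {
    return new SortOrder[0];
  }
}
```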
SortOrder[] requiredOrdering()
Spark will order incoming records within partitions to satisfy the required ordering before passing those records to the data source table on write.
Batch and micro-batch writes can request a particular data ordering. If an ordering is requested in the micro-batch context, incoming records in each micro batch will satisfy the required ordering (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support ordering requirements.
Implementations may return an empty array if they don't require any specific ordering of data on write.
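For example, a file-based sink that writes sorted files could request that each partition's records arrive ordered by an event-time column. A sketch, with a hypothetical class and a hypothetical "event_time" column; Expressions.sort builds a connector SortOrder:

```java
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortDirection;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Sketch: no distribution requirement, but records within each partition
// must arrive sorted by "event_time" ascending.
public class TimeOrderedWrite implements RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    return Distributions.unspecified();  // any placement across partitions is fine
  }

  @Override
  public SortOrder[] requiredOrdering() {
    return new SortOrder[] {
      Expressions.sort(Expressions.column("event_time"), SortDirection.ASCENDING)
    };
  }
}
```

Spark will add the in-partition sort needed to satisfy this ordering before handing records to the write.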