Interface RequiresDistributionAndOrdering
- All Superinterfaces:
Write
- Since:
- 3.2.0
-
Method Summary
- default long advisoryPartitionSizeInBytes()
  Returns the advisory (not guaranteed) shuffle partition size in bytes for this write.
- default boolean distributionStrictlyRequired()
  Returns whether the distribution required by this write is strictly required or best effort only.
- Distribution requiredDistribution()
  Returns the distribution required by this write.
- default int requiredNumPartitions()
  Returns the number of partitions required by this write.
- SortOrder[] requiredOrdering()
  Returns the ordering required by this write.
Methods inherited from interface org.apache.spark.sql.connector.write.Write
- description, supportedCustomMetrics, toBatch, toStreaming
-
Method Details
-
requiredDistribution
Distribution requiredDistribution()
Returns the distribution required by this write. Spark will distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write.
Batch and micro-batch writes can request a particular data distribution. If a distribution is requested in the micro-batch context, incoming records in each micro batch will satisfy the required distribution (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support distribution requirements.
Implementations may return UnspecifiedDistribution if they don't require any specific distribution of data on write.
- Returns:
- the required distribution
-
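For example, a write that needs all rows with the same key to end up in the same output partition can return a clustered distribution. A minimal sketch, assuming the Distributions and Expressions factory classes from the connector API; the class name KeyClusteredWrite and the "key" column are hypothetical:

import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Hypothetical write that clusters incoming records by a "key" column.
class KeyClusteredWrite implements RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    // Route all rows with the same "key" value to the same partition before writing.
    Expression[] clustering = new Expression[] { Expressions.column("key") };
    return Distributions.clustered(clustering);
  }

  @Override
  public SortOrder[] requiredOrdering() {
    // No ordering requirement within partitions for this sketch.
    return new SortOrder[0];
  }
}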
distributionStrictlyRequired
default boolean distributionStrictlyRequired()
Returns whether the distribution required by this write is strictly required or best effort only. If true, Spark will strictly distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write. Otherwise, Spark may apply certain optimizations to speed up the query but break the distribution requirement.
- Returns:
- true if the distribution required by this write is strictly required; false otherwise.
-
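If the requested clustering is only a performance hint rather than a correctness requirement, the write can relax it to best effort. A sketch continuing the hypothetical KeyClusteredWrite above:

@Override
public boolean distributionStrictlyRequired() {
  // Best effort only: Spark may skip or rewrite the shuffle if it expects that to be faster.
  return false;
}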
requiredNumPartitions
default int requiredNumPartitions()
Returns the number of partitions required by this write. Implementations may override this to require a specific number of input partitions.
Note that Spark doesn't support a required number of partitions with UnspecifiedDistribution; the query will fail if the number of partitions is provided but the distribution is unspecified. Data sources may either request a particular number of partitions or a preferred partition size via advisoryPartitionSizeInBytes(), not both.
- Returns:
- the required number of partitions; any value less than 1 means no requirement.
-
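For example, a write that needs a fixed output layout could request an exact partition count; returning a value less than 1 keeps the default behavior of no requirement. A sketch continuing the hypothetical KeyClusteredWrite above (the count of 10 is an arbitrary illustration):

@Override
public int requiredNumPartitions() {
  // Hypothetical: require exactly 10 input partitions for this write.
  // The API allows either this or advisoryPartitionSizeInBytes(), not both.
  return 10;
}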
advisoryPartitionSizeInBytes
default long advisoryPartitionSizeInBytes()
Returns the advisory (not guaranteed) shuffle partition size in bytes for this write. Implementations may override this to indicate the preferred partition size in shuffles performed to satisfy the requested distribution. Note that Spark doesn't support setting the advisory partition size for UnspecifiedDistribution; the query will fail if the advisory partition size is set but the distribution is unspecified. Data sources may either request a particular number of partitions via requiredNumPartitions() or a preferred partition size, not both.
Data sources should be careful with large advisory sizes, as they will reduce write parallelism and may degrade overall job performance.
Note that this value acts only as guidance: Spark does not guarantee that the actual and advisory shuffle partition sizes will match. It is ignored if adaptive query execution is disabled.
- Returns:
- the advisory partition size; any value less than 1 means no preference.
-
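Alternatively, a write can leave the partition count to Spark and only hint at a target shuffle partition size. A sketch continuing the hypothetical KeyClusteredWrite above (the 128 MB figure is an arbitrary illustration):

@Override
public long advisoryPartitionSizeInBytes() {
  // Hypothetical: aim for roughly 128 MB shuffle partitions when satisfying the distribution.
  // Only a hint; ignored when adaptive query execution is disabled,
  // and the API allows either this or requiredNumPartitions(), not both.
  return 128L * 1024 * 1024;
}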
requiredOrdering
SortOrder[] requiredOrdering()
Returns the ordering required by this write. Spark will order incoming records within partitions to satisfy the required ordering before passing those records to the data source table on write.
Batch and micro-batch writes can request a particular data ordering. If an ordering is requested in the micro-batch context, incoming records in each micro batch will satisfy the required ordering (but not across micro batches). The continuous execution mode continuously processes streaming data and does not support ordering requirements.
Implementations may return an empty array if they don't require any specific ordering of data on write.
- Returns:
- the required ordering
-
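For example, a write that stores files sorted by a timestamp can request an in-partition sort. A sketch continuing the hypothetical KeyClusteredWrite above, assuming the Expressions.sort factory and the SortDirection enum from the connector expressions API and a hypothetical "ts" column:

// SortDirection is org.apache.spark.sql.connector.expressions.SortDirection.
@Override
public SortOrder[] requiredOrdering() {
  // Sort records within each partition by "ts" ascending before they reach the writer.
  return new SortOrder[] {
      Expressions.sort(Expressions.column("ts"), SortDirection.ASCENDING)
  };
}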