org.apache.spark.sql.connector.write

DeltaWriterFactory

trait DeltaWriterFactory extends DataWriterFactory

A factory for creating DeltaWriters, returned by DeltaBatchWrite#createBatchWriterFactory(PhysicalWriteInfo). The factory is responsible for creating and initializing the actual writers on the executor side.

Annotations
@Experimental()
Source
DeltaWriterFactory.java
Since

3.4.0

Linear Supertypes

DataWriterFactory, Serializable, AnyRef, Any

Abstract Value Members

  1. abstract def createWriter(partitionId: Int, taskId: Long): DeltaWriter[InternalRow]

    Returns a data writer to do the actual writing work. Note that Spark will reuse the same data object instance when sending data to the data writer, for better performance. Data writers are responsible for making defensive copies if necessary, e.g. copying the data before buffering it in a list.

    If this method fails (by throwing an exception), the corresponding Spark write task will fail and be retried until hitting the maximum retry times.

    partitionId

    A unique id of the RDD partition that the returned writer will process. Since Spark usually processes many RDD partitions at the same time, implementations should use the partition id to distinguish writers for different partitions.

    taskId

    The task id returned by TaskContext#taskAttemptId(). Spark may run multiple tasks for the same partition (due to speculation or task failures, for example).

    Definition Classes
    DeltaWriterFactory → DataWriterFactory
    Annotations
    @Override()
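
The contract above can be sketched as follows. This is a minimal, self-contained illustration of the factory pattern and the defensive-copy requirement, not the real Spark API: the `RowWriter` and `WriterFactory` interfaces below are simplified stand-ins for `DataWriter[InternalRow]` and `DeltaWriterFactory`, and `int[]` stands in for a reused `InternalRow`.

```java
import java.util.ArrayList;
import java.util.List;

public class FactorySketch {
    // Stand-in for DeltaWriter[InternalRow]: buffers rows until commit.
    interface RowWriter {
        void write(int[] row);   // stand-in for write(InternalRow)
        List<int[]> commit();    // stand-in for commit()
    }

    // Stand-in for DeltaWriterFactory. In Spark the factory must be
    // serializable so it can be shipped to executors, where createWriter
    // is invoked once per (partitionId, taskId).
    interface WriterFactory extends java.io.Serializable {
        RowWriter createWriter(int partitionId, long taskId);
    }

    static class BufferingFactory implements WriterFactory {
        @Override
        public RowWriter createWriter(int partitionId, long taskId) {
            return new RowWriter() {
                private final List<int[]> buffer = new ArrayList<>();

                @Override
                public void write(int[] row) {
                    // Spark reuses the same row object between calls, so a
                    // defensive copy is required before buffering it.
                    buffer.add(row.clone());
                }

                @Override
                public List<int[]> commit() {
                    return buffer;
                }
            };
        }
    }

    public static void main(String[] args) {
        WriterFactory factory = new BufferingFactory();
        RowWriter writer = factory.createWriter(0, 42L);

        int[] reused = {1, 2, 3};
        writer.write(reused);
        reused[0] = 99;          // mutate the reused object, as Spark does
        writer.write(reused);

        List<int[]> rows = writer.commit();
        // The first buffered row keeps its original value only because
        // write() copied it; without clone() both rows would read 99.
        System.out.println(rows.get(0)[0] + "," + rows.get(1)[0]);
    }
}
```

Without the `clone()` call, both buffered entries would alias the same mutated array, which is exactly the hazard the "defensive copies" note in `createWriter` warns about.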