public interface DataWriterFactory<T> extends Serializable
A factory of DataWriter returned by DataSourceWriter.createWriterFactory(),
which is responsible for creating and initializing the actual data writer on the executor side.
Note that the writer factory will be serialized and sent to executors, and the data writer
will then be created on the executors to do the actual writing. So DataWriterFactory must be
serializable, while DataWriter doesn't need to be.
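To make the serialization contract concrete, here is a minimal sketch. The `DataWriter` and `DataWriterFactory` interfaces below are hypothetical simplifications of Spark's real ones (the real `commit()` returns a `WriterCommitMessage`, for example); `InMemoryWriterFactory` and its buffering writer are invented for illustration. The point is that the factory carries only serializable state and can be shipped to executors, while the writer it creates exists only on the executor and may hold non-serializable resources.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-ins for Spark's real interfaces in
// org.apache.spark.sql.sources.v2.writer.
interface DataWriter<T> {
    void write(T record);
    String commit(); // the real API returns a WriterCommitMessage
}

interface DataWriterFactory<T> extends Serializable {
    DataWriter<T> createDataWriter(int partitionId, long taskId, long epochId);
}

// The factory holds only serializable state (a path string), so Spark can
// serialize it on the driver and deserialize it on each executor.
class InMemoryWriterFactory implements DataWriterFactory<String> {
    private final String outputDir;

    InMemoryWriterFactory(String outputDir) {
        this.outputDir = outputDir;
    }

    @Override
    public DataWriter<String> createDataWriter(int partitionId, long taskId, long epochId) {
        // Created on the executor after deserialization. The ids make the
        // output target unique per partition, task attempt, and epoch.
        String target = outputDir + "/part-" + partitionId + "-" + taskId + "-" + epochId;
        List<String> buffer = new ArrayList<>(); // executor-local, not serialized
        return new DataWriter<String>() {
            @Override public void write(String record) { buffer.add(record); }
            @Override public String commit() { return target + " (" + buffer.size() + " records)"; }
        };
    }
}
```

Keeping all non-serializable resources (buffers, file handles, connections) inside the writer rather than the factory is what lets this split work.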
Returns a data writer to do the actual writing work.
DataWriter<T> createDataWriter(int partitionId, long taskId, long epochId)
Returns a data writer to do the actual writing work. Note that Spark will reuse the same data
object instance when sending data to the data writer, for better performance. Data writers
are responsible for making defensive copies if necessary, e.g. copying the data before buffering it in a list.
If this method fails (by throwing an exception), the action will fail and no Spark job will be submitted.
partitionId - A unique id of the RDD partition that the returned writer will process.
Usually Spark processes many RDD partitions at the same time;
implementations should use the partition id to distinguish writers for
different partitions.
taskId - A unique identifier for a task that is performing the write of the partition
data. Spark may run multiple tasks for the same partition (due to speculation
or task failures, for example).
epochId - A monotonically increasing id for streaming queries that are split into
discrete periods of execution. For non-streaming queries,
this ID will always be 0.
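The object-reuse caveat above is the most common pitfall when implementing a writer. The sketch below shows why a defensive copy is needed; `Row` is a hypothetical mutable record type standing in for the reused instance Spark passes in, and `BufferingWriter` is an invented example, not Spark's API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mutable record type, standing in for the single object
// instance that Spark reuses across successive write() calls.
class Row {
    Object[] values;
    Row(Object... values) { this.values = values; }
    Row copy() { return new Row(values.clone()); }
}

// A writer that buffers records must copy them first; otherwise every
// buffered entry ends up aliasing the one reused instance.
class BufferingWriter {
    private final List<Row> buffer = new ArrayList<>();

    void write(Row reused) {
        buffer.add(reused.copy()); // defensive copy before buffering
    }

    int bufferedCount() { return buffer.size(); }
    Object firstValue() { return buffer.get(0).values[0]; }
}
```

Without the `copy()`, mutating the reused instance for the next record would silently rewrite every row already sitting in the buffer.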