public static <K,V> void write(RDD<scala.Tuple2<K,V>> rdd, HadoopWriteConfigUtil<K,V> config, scala.reflect.ClassTag<V> evidence$1)
The basic workflow of this command is:
1. Driver-side setup: prepare the data source and Hadoop configuration for the write job to be issued.
2. Issue a write job consisting of one or more executor-side tasks, each of which writes all
rows within an RDD partition.
3. If no exception is thrown in a task, commit that task; otherwise abort it. If any
exception is thrown during task commitment, also abort that task.
4. If all tasks are committed, commit the job; otherwise abort the job. If any exception is
thrown during job commitment, also abort the job.
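The task/job commit steps above can be sketched, independently of Spark, as a minimal commit protocol. This is an illustrative sketch only: the `TaskWriter` interface and `runJob` method are hypothetical stand-ins, not Spark's actual API (Spark delegates this logic to a Hadoop `OutputCommitter`).

```java
import java.util.List;

// Hypothetical stand-in for a per-task writer/committer; names are illustrative.
interface TaskWriter {
    void write(String row) throws Exception;
    void commitTask();
    void abortTask();
}

class CommitProtocolSketch {
    // Mirrors steps 2-4: write each partition's rows in a task, commit the
    // task on success, abort it on failure; commit the job only if every
    // task committed, otherwise abort the whole job.
    static boolean runJob(List<List<String>> partitions, TaskWriter writer) {
        for (List<String> partition : partitions) {
            try {
                for (String row : partition) {
                    writer.write(row);            // step 2: write all rows in the partition
                }
                writer.commitTask();              // step 3: no exception, commit the task
            } catch (Exception e) {
                writer.abortTask();               // step 3: exception, abort the task
                return false;                     // step 4: a task failed, job is aborted
            }
        }
        return true;                              // step 4: all tasks committed, commit the job
    }
}
```

In the real implementation the per-task commit is what makes the output atomic at partition granularity: a task that fails mid-write leaves only uncommitted temporary output behind.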
rdd - the RDD of key-value pairs to write
config - the write-job configuration (data source, output format, and committer setup)
evidence$1 - implicit ClassTag for the value type V