Package org.apache.spark.mapred
Class SparkHadoopMapRedUtil
Object
    org.apache.spark.mapred.SparkHadoopMapRedUtil
Constructor Summary
Method Summary
Modifier and Type    Method    Description
static void    commitTask(org.apache.hadoop.mapreduce.OutputCommitter committer, org.apache.hadoop.mapreduce.TaskAttemptContext mrTaskContext, int jobId, int splitId)    Commits a task output.
static org.apache.spark.internal.Logging.LogStringContext    LogStringContext(scala.StringContext sc)
static org.slf4j.Logger    org$apache$spark$internal$Logging$$log_()
static void    org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
Constructor Details
SparkHadoopMapRedUtil
public SparkHadoopMapRedUtil()
Method Details
commitTask
public static void commitTask(org.apache.hadoop.mapreduce.OutputCommitter committer, org.apache.hadoop.mapreduce.TaskAttemptContext mrTaskContext, int jobId, int splitId)
Commits a task output. Before committing the task output, Spark needs to know whether some other task attempt might be racing to commit the same output partition, so this method coordinates with the driver to determine whether this attempt can commit (see SPARK-4879 for details). The output commit coordinator is only used when spark.hadoop.outputCommitCoordination.enabled is set to true (which is the default).
Parameters:
committer - (undocumented)
mrTaskContext - (undocumented)
jobId - (undocumented)
splitId - (undocumented)
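The "first attempt to ask wins" idea behind this coordination can be sketched with a toy in-process coordinator. This is an illustration of the protocol only, not Spark's actual OutputCommitCoordinator (which runs on the driver, is consulted over RPC, and also tracks failed attempts); the class and method names below are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of output commit coordination: the first task attempt to
// request a (jobId, splitId) partition is authorized to commit; any
// later attempt racing on the same partition is denied.
public class ToyCommitCoordinator {
    // Maps "jobId:splitId" to the attempt number authorized to commit it.
    private final ConcurrentHashMap<String, Integer> authorized = new ConcurrentHashMap<>();

    public boolean canCommit(int jobId, int splitId, int attemptNumber) {
        String key = jobId + ":" + splitId;
        // putIfAbsent is atomic, so concurrent attempts cannot both "win".
        Integer winner = authorized.putIfAbsent(key, attemptNumber);
        // Commit is allowed if we won the race, or if we are re-asking
        // as the already-authorized attempt.
        return winner == null || winner == attemptNumber;
    }

    public static void main(String[] args) {
        ToyCommitCoordinator coordinator = new ToyCommitCoordinator();
        System.out.println(coordinator.canCommit(0, 7, 1)); // first attempt: true
        System.out.println(coordinator.canCommit(0, 7, 2)); // racing attempt: false
        System.out.println(coordinator.canCommit(0, 7, 1)); // same attempt again: true
    }
}
```

In Spark itself, commitTask performs this check against the driver and only then calls the Hadoop committer's commitTask; when the driver denies the commit, the task attempt fails instead of writing duplicate output.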
org$apache$spark$internal$Logging$$log_
public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
org$apache$spark$internal$Logging$$log__$eq
public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
LogStringContext
public static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)