Package org.apache.spark.mapred

Class SparkHadoopMapRedUtil

Object
  org.apache.spark.mapred.SparkHadoopMapRedUtil
Constructor Summary

Constructors
- SparkHadoopMapRedUtil()

Method Summary

Modifier and Type | Method | Description
static void | commitTask(org.apache.hadoop.mapreduce.OutputCommitter committer, org.apache.hadoop.mapreduce.TaskAttemptContext mrTaskContext, int jobId, int splitId) | Commits a task output.
static org.apache.spark.internal.Logging.LogStringContext | LogStringContext(scala.StringContext sc) |
static org.slf4j.Logger | org$apache$spark$internal$Logging$$log_() |
static void | org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1) |
Constructor Details

- SparkHadoopMapRedUtil

  public SparkHadoopMapRedUtil()
Method Details

- commitTask

  public static void commitTask(org.apache.hadoop.mapreduce.OutputCommitter committer,
                                org.apache.hadoop.mapreduce.TaskAttemptContext mrTaskContext,
                                int jobId,
                                int splitId)

  Commits a task output. Before committing the task output, we need to know whether some other task attempt might be racing to commit the same output partition. Therefore, coordinate with the driver to determine whether this attempt can commit (see SPARK-4879 for details). The output commit coordinator is only used when spark.hadoop.outputCommitCoordination.enabled is set to true (which is the default).

  Parameters:
  - committer - (undocumented)
  - mrTaskContext - (undocumented)
  - jobId - (undocumented)
  - splitId - (undocumented)
 
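The driver-side arbitration that commitTask delegates to can be sketched as follows. This is a hypothetical illustration of the first-committer-wins idea behind SPARK-4879, not Spark's actual OutputCommitCoordinator API: the CommitCoordinatorSketch class and its canCommit method are invented names for this sketch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the commit-coordination rule: for each output
// partition, the first task attempt to ask is authorized to commit, and
// every other racing attempt is denied. Not Spark's real coordinator.
public class CommitCoordinatorSketch {
    // Key: "jobId:splitId"; value: the attempt number authorized to commit.
    private final Map<String, Integer> authorized = new ConcurrentHashMap<>();

    // Returns true iff this attempt may commit the partition's output.
    public boolean canCommit(int jobId, int splitId, int attemptNumber) {
        String key = jobId + ":" + splitId;
        Integer winner = authorized.putIfAbsent(key, attemptNumber);
        // Either we won the race, or we are the already-authorized attempt asking again.
        return winner == null || winner == attemptNumber;
    }

    public static void main(String[] args) {
        CommitCoordinatorSketch coordinator = new CommitCoordinatorSketch();
        System.out.println(coordinator.canCommit(0, 7, 1)); // first attempt: true
        System.out.println(coordinator.canCommit(0, 7, 2)); // racing attempt: false
        System.out.println(coordinator.canCommit(0, 7, 1)); // same attempt again: true
    }
}
```

In the real API, commitTask performs this check against the driver before invoking the Hadoop committer; when the coordinator denies the attempt, the task's output is aborted instead of committed.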
- org$apache$spark$internal$Logging$$log_

  public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()

- org$apache$spark$internal$Logging$$log__$eq

  public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)

- LogStringContext

  public static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)