Class TaskContext

- All Implemented Interfaces:
  - Serializable
- Direct Known Subclasses:
  - BarrierTaskContext

To access the TaskContext for a running task, use:

  org.apache.spark.TaskContext.get()
Constructor Summary

Constructors:
- TaskContext()
Method Summary

- abstract TaskContext addTaskCompletionListener(TaskCompletionListener listener)
  Adds a (Java friendly) listener to be executed on task completion.
- <U> TaskContext addTaskCompletionListener(scala.Function1<TaskContext, U> f)
  Adds a listener in the form of a Scala closure to be executed on task completion.
- abstract TaskContext addTaskFailureListener(TaskFailureListener listener)
  Adds a listener to be executed on task failure (which includes completion listener failure, if the task body did not already fail).
- TaskContext addTaskFailureListener(scala.Function2<TaskContext, Throwable, scala.runtime.BoxedUnit> f)
  Adds a listener to be executed on task failure (which includes completion listener failure, if the task body did not already fail).
- abstract int attemptNumber()
  How many times this task has been attempted.
- abstract int cpus()
  CPUs allocated to the task.
- static TaskContext get()
  Return the currently active TaskContext.
- abstract String getLocalProperty(String key)
  Get a local property set upstream in the driver, or null if it is missing.
- abstract scala.collection.immutable.Seq<Source> getMetricsSources(String sourceName)
  ::DeveloperApi:: Returns all metrics sources with the given name which are associated with the instance which runs the task.
- static int getPartitionId()
  Returns the partition id of the currently active TaskContext.
- abstract boolean isCompleted()
  Returns true if the task has completed.
- abstract boolean isFailed()
  Returns true if the task has failed.
- abstract boolean isInterrupted()
  Returns true if the task has been killed.
- abstract int numPartitions()
  Total number of partitions in the stage that this task belongs to.
- abstract int partitionId()
  The ID of the RDD partition that is computed by this task.
- abstract scala.collection.immutable.Map<String, ResourceInformation> resources()
  Resources allocated to the task.
- abstract Map<String, ResourceInformation> resourcesJMap()
  (java-specific) Resources allocated to the task.
- abstract int stageAttemptNumber()
  How many times the stage that this task belongs to has been attempted.
- abstract int stageId()
  The ID of the stage that this task belongs to.
- abstract long taskAttemptId()
  An ID that is unique to this task attempt (within the same SparkContext, no two task attempts will share the same attempt ID).
- abstract org.apache.spark.executor.TaskMetrics taskMetrics()
- static <T> T withTaskContext(TaskContext context, scala.Function0<T> task)
Constructor Details

TaskContext

public TaskContext()
Method Details

get

public static TaskContext get()

Return the currently active TaskContext. This can be called inside of user functions to access contextual information about running tasks.
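For example, a minimal sketch (the SparkContext `sc` and the toy RDD are assumptions, e.g. a spark-shell session):

```scala
import org.apache.spark.TaskContext

val rdd = sc.parallelize(1 to 100, numSlices = 4)
rdd.foreach { _ =>
  val ctx = TaskContext.get()  // the currently active TaskContext for this task
  println(s"partition=${ctx.partitionId()} attempt=${ctx.attemptNumber()}")
}
```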
getPartitionId

public static int getPartitionId()

Returns the partition id of the currently active TaskContext. It will return 0 if there is no active TaskContext, as in cases like local execution.
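A small sketch; unlike get(), this static helper is safe to call where no task is running:

```scala
import org.apache.spark.TaskContext

// On the driver (or in local execution) there is no active TaskContext, so this returns 0.
val pid = TaskContext.getPartitionId()
println(s"partition id outside a task: $pid")
```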
withTaskContext

public static <T> T withTaskContext(TaskContext context, scala.Function0<T> task)
isCompleted

public abstract boolean isCompleted()

Returns true if the task has completed.
isFailed

public abstract boolean isFailed()

Returns true if the task has failed.
isInterrupted

public abstract boolean isInterrupted()

Returns true if the task has been killed.
addTaskCompletionListener

public abstract TaskContext addTaskCompletionListener(TaskCompletionListener listener)

Adds a (Java friendly) listener to be executed on task completion. This will be called in all situations: success, failure, or cancellation. Adding a listener to an already completed task will result in that listener being called immediately.

Two listeners registered in the same thread will be invoked in reverse order of registration if the task completes after both are registered. There are no ordering guarantees for listeners registered in different threads, or for listeners registered after the task completes. Listeners are guaranteed to execute sequentially. An example use is for HadoopRDD to register a callback to close the input stream. Exceptions thrown by the listener will result in failure of the task.
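A hedged sketch of registering the Java-friendly listener from Scala; the RDD `rdd` and the side-file path are illustrative assumptions:

```scala
import java.io.FileInputStream
import org.apache.spark.TaskContext
import org.apache.spark.util.TaskCompletionListener

rdd.foreachPartition { _ =>
  val in = new FileInputStream("/tmp/side-input")  // hypothetical per-task resource
  TaskContext.get().addTaskCompletionListener(new TaskCompletionListener {
    // Runs on success, failure, or cancellation of the task.
    override def onTaskCompletion(context: TaskContext): Unit = in.close()
  })
  // ... read from `in` while processing the partition ...
}
```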
addTaskCompletionListener

public <U> TaskContext addTaskCompletionListener(scala.Function1<TaskContext, U> f)

Adds a listener in the form of a Scala closure to be executed on task completion. This will be called in all situations: success, failure, or cancellation. Adding a listener to an already completed task will result in that listener being called immediately.

An example use is for HadoopRDD to register a callback to close the input stream. Exceptions thrown by the listener will result in failure of the task.
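A sketch of the Scala-closure overload; the explicit [Unit] type parameter keeps the call unambiguous against the listener overload, and `rdd` is assumed:

```scala
import org.apache.spark.TaskContext

rdd.foreachPartition { _ =>
  TaskContext.get().addTaskCompletionListener[Unit] { tc =>
    println(s"task for partition ${tc.partitionId()} finished")
  }
  // ... partition processing ...
}
```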
addTaskFailureListener

public abstract TaskContext addTaskFailureListener(TaskFailureListener listener)

Adds a listener to be executed on task failure (which includes completion listener failure, if the task body did not already fail). Adding a listener to an already failed task will result in that listener being called immediately.

Note: Prior to Spark 3.4.0, failure listeners were only invoked if the main task body failed.
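A sketch of registering a failure listener from Scala (`rdd` is an assumption):

```scala
import org.apache.spark.TaskContext
import org.apache.spark.util.TaskFailureListener

rdd.foreachPartition { _ =>
  TaskContext.get().addTaskFailureListener(new TaskFailureListener {
    // Invoked if the task fails; on Spark 3.4.0+ this also covers completion listener failures.
    override def onTaskFailure(context: TaskContext, error: Throwable): Unit =
      System.err.println(s"partition ${context.partitionId()} failed: ${error.getMessage}")
  })
  // ... work that may throw ...
}
```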
addTaskFailureListener

public TaskContext addTaskFailureListener(scala.Function2<TaskContext, Throwable, scala.runtime.BoxedUnit> f)

Adds a listener to be executed on task failure (which includes completion listener failure, if the task body did not already fail). Adding a listener to an already failed task will result in that listener being called immediately.

Note: Prior to Spark 3.4.0, failure listeners were only invoked if the main task body failed.
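A sketch of the Function2 overload; binding the closure to an explicitly typed function value selects this overload without ambiguity (`rdd` is assumed):

```scala
import org.apache.spark.TaskContext

rdd.foreachPartition { _ =>
  val onFailure: (TaskContext, Throwable) => Unit =
    (tc, error) => System.err.println(s"partition ${tc.partitionId()} failed: ${error.getMessage}")
  TaskContext.get().addTaskFailureListener(onFailure)
  // ... work that may throw ...
}
```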
stageId

public abstract int stageId()

The ID of the stage that this task belongs to.
stageAttemptNumber

public abstract int stageAttemptNumber()

How many times the stage that this task belongs to has been attempted. The first stage attempt will be assigned stageAttemptNumber = 0, and subsequent attempts will have increasing attempt numbers.
partitionId

public abstract int partitionId()

The ID of the RDD partition that is computed by this task.
numPartitions

public abstract int numPartitions()

Total number of partitions in the stage that this task belongs to.
attemptNumber

public abstract int attemptNumber()

How many times this task has been attempted. The first task attempt will be assigned attemptNumber = 0, and subsequent attempts will have increasing attempt numbers.
taskAttemptId

public abstract long taskAttemptId()

An ID that is unique to this task attempt (within the same SparkContext, no two task attempts will share the same attempt ID). This is roughly equivalent to Hadoop's TaskAttemptID.
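Taken together, the identity fields above can be logged from inside a task; a minimal sketch (`rdd` is assumed):

```scala
import org.apache.spark.TaskContext

rdd.foreachPartition { _ =>
  val ctx = TaskContext.get()
  println(
    s"stage=${ctx.stageId()} stageAttempt=${ctx.stageAttemptNumber()} " +
    s"partition=${ctx.partitionId()} of ${ctx.numPartitions()} " +
    s"taskAttemptId=${ctx.taskAttemptId()} attemptNumber=${ctx.attemptNumber()}")
}
```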
getLocalProperty

public abstract String getLocalProperty(String key)

Get a local property set upstream in the driver, or null if it is missing. See also org.apache.spark.SparkContext.setLocalProperty.
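A sketch of the round trip: set a property on the driver, read it in the task (the property key and `sc`/`rdd` are illustrative assumptions):

```scala
import org.apache.spark.TaskContext

// On the driver: local properties set in this thread are propagated to its tasks.
sc.setLocalProperty("myapp.batch.id", "batch-42")

rdd.foreach { _ =>
  // "batch-42" inside the task, or null if the property was never set.
  val batchId = TaskContext.get().getLocalProperty("myapp.batch.id")
  println(s"running under batch $batchId")
}
```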
cpus

public abstract int cpus()

CPUs allocated to the task.
resources

public abstract scala.collection.immutable.Map<String, ResourceInformation> resources()

Resources allocated to the task. The key is the resource name and the value is information about the resource. Please refer to ResourceInformation for specifics.
resourcesJMap

public abstract Map<String, ResourceInformation> resourcesJMap()

(java-specific) Resources allocated to the task. The key is the resource name and the value is information about the resource. Please refer to ResourceInformation for specifics.
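A sketch combining cpus() and resources(); the "gpu" resource name assumes the application was configured with task GPU resources (e.g. spark.task.resource.gpu.amount), and `rdd` is assumed:

```scala
import org.apache.spark.TaskContext

rdd.foreachPartition { _ =>
  val ctx = TaskContext.get()
  println(s"cpus allocated to this task: ${ctx.cpus()}")
  // resources() maps resource name -> ResourceInformation (name plus assigned addresses).
  ctx.resources().get("gpu").foreach { info =>
    println(s"gpu addresses for this task: ${info.addresses.mkString(", ")}")
  }
}
```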
taskMetrics

public abstract org.apache.spark.executor.TaskMetrics taskMetrics()
getMetricsSources

public abstract scala.collection.immutable.Seq<Source> getMetricsSources(String sourceName)

::DeveloperApi:: Returns all metrics sources with the given name which are associated with the instance which runs the task. For more information see org.apache.spark.metrics.MetricsSystem.
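A hedged sketch for this ::DeveloperApi:: method; the source name below is a placeholder, since which sources are registered depends on the deployment, and `rdd` is assumed:

```scala
import org.apache.spark.TaskContext

rdd.foreachPartition { _ =>
  // "JvmCpu" is a hypothetical source name; substitute a source registered in your MetricsSystem.
  val sources = TaskContext.get().getMetricsSources("JvmCpu")
  sources.foreach { src =>
    println(s"source ${src.sourceName}: ${src.metricRegistry.getMetrics.size} metrics")
  }
}
```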