Package org.apache.spark.scheduler
Class TaskInfo
Object
org.apache.spark.scheduler.TaskInfo
- All Implemented Interfaces:
- Cloneable
:: DeveloperApi ::
 Information about a running task attempt inside a TaskSet.
Constructor Summary

Constructors

TaskInfo(long taskId, int index, int attemptNumber, int partitionId, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)

TaskInfo(long taskId, int index, int attemptNumber, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)
This API does not contain partitionId; please use the new API.
Method Summary

scala.collection.immutable.Seq<AccumulableInfo> accumulables()
Intermediate updates to accumulables during this task.

int attemptNumber()

TaskInfo clone()

long duration()

String executorId()

boolean failed()

long finishTime()
The time when the task has completed successfully (including the time to remotely fetch results, if necessary).

boolean finished()

boolean gettingResult()

long gettingResultTime()
The time when the task started remotely getting the result.

String host()

String id()

int index()
The index of this task within its task set.

boolean killed()

long launchTime()

boolean launching()

int partitionId()
The actual RDD partition ID in this task.

boolean running()

boolean speculative()

String status()

boolean successful()

long taskId()

scala.Enumeration.Value taskLocality()
Constructor Details
TaskInfo

public TaskInfo(long taskId, int index, int attemptNumber, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)

This API does not contain partitionId; please use the new API instead. It is retained for backward compatibility with releases before Spark 3.3.

Parameters:
taskId - (undocumented)
index - (undocumented)
attemptNumber - (undocumented)
launchTime - (undocumented)
executorId - (undocumented)
host - (undocumented)
taskLocality - (undocumented)
speculative - (undocumented)
 
 
Method Details
accumulables

public scala.collection.immutable.Seq<AccumulableInfo> accumulables()

Intermediate updates to accumulables during this task. Note that it is valid for the same accumulable to be updated multiple times in a single task or for two accumulables with the same name but different IDs to exist in a task.

Returns:
(undocumented)
 
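Because the same accumulable can be updated more than once in a single task, and two accumulables may share a name while having different IDs, consumers should key updates by accumulable ID rather than by name. A minimal, self-contained Java sketch of that bookkeeping (the `Update` record below is a hypothetical stand-in for Spark's `AccumulableInfo`, not the real class):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AccumulableUpdates {
    // Hypothetical stand-in for AccumulableInfo: only id, name, and value.
    record Update(long id, String name, long value) {}

    // Keep only the latest update per accumulable ID. Two accumulables may
    // share a name but have different IDs, so the ID is the correct key.
    static Map<Long, Update> latestById(List<Update> updates) {
        Map<Long, Update> latest = new LinkedHashMap<>();
        for (Update u : updates) {
            latest.put(u.id(), u); // later updates overwrite earlier ones
        }
        return latest;
    }

    public static void main(String[] args) {
        List<Update> updates = new ArrayList<>();
        updates.add(new Update(1, "records.read", 10));
        updates.add(new Update(1, "records.read", 25)); // same accumulable, updated again
        updates.add(new Update(2, "records.read", 7));  // same name, different ID
        Map<Long, Update> latest = latestById(updates);
        System.out.println(latest.size());          // 2 distinct accumulables
        System.out.println(latest.get(1L).value()); // 25
    }
}
```

Keying by name instead would silently merge the two distinct accumulables with ID 1 and ID 2.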
attemptNumber

public int attemptNumber()

clone

public TaskInfo clone()

duration

public long duration()

executorId

public String executorId()

failed

public boolean failed()
finishTime

public long finishTime()

The time when the task has completed successfully (including the time to remotely fetch results, if necessary).

Returns:
(undocumented)
 
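The relationship between launchTime, finishTime, and duration can be sketched with a plain Java mock (the `TimedTask` class below is hypothetical, not Spark's implementation; it assumes duration is simply finishTime minus launchTime and is only defined once the task has finished):

```java
public class TimedTask {
    private final long launchTime;
    private long finishTime = 0; // 0 until the task completes

    TimedTask(long launchTime) { this.launchTime = launchTime; }

    void markFinished(long finishTime) { this.finishTime = finishTime; }

    boolean finished() { return finishTime != 0; }

    // Wall-clock duration in ms; only meaningful after the task finishes.
    long duration() {
        if (!finished()) {
            throw new IllegalStateException("duration() called on a running task");
        }
        return finishTime - launchTime;
    }

    public static void main(String[] args) {
        TimedTask t = new TimedTask(1_000L);
        t.markFinished(4_500L);
        System.out.println(t.duration()); // 3500
    }
}
```

Note that because finishTime includes any remote result fetch, a duration derived this way covers fetch time as well, not just execution.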
finished

public boolean finished()

gettingResult

public boolean gettingResult()

gettingResultTime

public long gettingResultTime()

The time when the task started remotely getting the result. Will not be set if the task result was sent immediately when the task finished (as opposed to sending an IndirectTaskResult and later fetching the result from the block manager).

Returns:
(undocumented)
 
host

public String host()

id

public String id()

index

public int index()

The index of this task within its task set. Not necessarily the same as the ID of the RDD partition that the task is computing.

Returns:
(undocumented)
 
killed

public boolean killed()

launchTime

public long launchTime()

launching

public boolean launching()

partitionId

public int partitionId()

The actual RDD partition ID in this task. The ID of the RDD partition is always the same across task attempts. This will be -1 for historical data, and is available for all applications since Spark 3.3.

Returns:
(undocumented)
 
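The distinction between index and partitionId shows up when a stage is partially re-run: only the failed partitions are resubmitted, so the task at index 0 of the new task set may compute, say, partition 3 rather than partition 0. A hypothetical sketch of that mapping (not Spark's scheduler code):

```java
import java.util.List;

public class RetryTaskSet {
    // In a retried stage, the task set contains only the partitions being
    // re-run, so a task's index within the set can differ from the ID of
    // the RDD partition it computes.
    static int partitionIdFor(List<Integer> resubmittedPartitions, int taskIndex) {
        return resubmittedPartitions.get(taskIndex);
    }

    public static void main(String[] args) {
        List<Integer> resubmitted = List.of(3, 7, 11); // partitions being re-run
        System.out.println(partitionIdFor(resubmitted, 0)); // 3, not 0
        System.out.println(partitionIdFor(resubmitted, 1)); // 7
    }
}
```

This is why code that correlates task metrics with RDD partitions should use partitionId (when available) rather than index.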
running

public boolean running()

speculative

public boolean speculative()

status

public String status()

successful

public boolean successful()

taskId

public long taskId()

taskLocality

public scala.Enumeration.Value taskLocality()