Package org.apache.spark.scheduler

Class TaskInfo

Object
    org.apache.spark.scheduler.TaskInfo

:: DeveloperApi ::
Information about a running task attempt inside a TaskSet.
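A TaskInfo is normally obtained from the scheduler's listener bus rather than constructed by hand. As a minimal sketch (TaskLogListener is a hypothetical listener class; it would be registered via SparkContext.addSparkListener or the spark.extraListeners configuration):

    import org.apache.spark.scheduler.SparkListener;
    import org.apache.spark.scheduler.SparkListenerTaskEnd;
    import org.apache.spark.scheduler.TaskInfo;

    // Logs one line per completed task attempt, using the accessors
    // documented below. id() renders as "index.attemptNumber".
    public class TaskLogListener extends SparkListener {
      @Override
      public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
        TaskInfo info = taskEnd.taskInfo();
        System.out.println("task " + info.id()
            + " on " + info.host()
            + " [" + info.status() + "] "
            + info.duration() + " ms");
      }
    }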
Constructor Summary

TaskInfo(long taskId, int index, int attemptNumber, int partitionId, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)

TaskInfo(long taskId, int index, int attemptNumber, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)
    Deprecated. This API does not include partitionId; please use the new API. It is retained for backward compatibility with releases before Spark 3.3.
Method Summary

Modifier and Type                       Method               Description
scala.collection.Seq<AccumulableInfo>   accumulables()       Intermediate updates to accumulables during this task.
int                                     attemptNumber()
long                                    duration()
String                                  executorId()
boolean                                 failed()
boolean                                 finished()
long                                    finishTime()         The time when the task has completed successfully (including the time to remotely fetch results, if necessary).
boolean                                 gettingResult()
long                                    gettingResultTime()  The time when the task started remotely getting the result.
String                                  host()
String                                  id()
int                                     index()              The index of this task within its task set.
boolean                                 killed()
long                                    launchTime()
boolean                                 launching()
int                                     partitionId()        The actual RDD partition ID in this task.
boolean                                 running()
boolean                                 speculative()
String                                  status()
boolean                                 successful()
long                                    taskId()
scala.Enumeration.Value                 taskLocality()
Constructor Details

TaskInfo

public TaskInfo(long taskId, int index, int attemptNumber, int partitionId, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)

TaskInfo

public TaskInfo(long taskId, int index, int attemptNumber, long launchTime, String executorId, String host, scala.Enumeration.Value taskLocality, boolean speculative)

Deprecated. This API does not include partitionId; please use the new API. It is retained for backward compatibility with releases before Spark 3.3.

Parameters:
    taskId - (undocumented)
    index - (undocumented)
    attemptNumber - (undocumented)
    launchTime - (undocumented)
    executorId - (undocumented)
    host - (undocumented)
    taskLocality - (undocumented)
    speculative - (undocumented)
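User code rarely calls these constructors directly, but they can be useful for building fixtures when unit-testing listener code. A hypothetical sketch (sampleTaskInfo is not part of Spark; TaskLocality.PROCESS_LOCAL() assumes the static forwarder Scala generates for the org.apache.spark.scheduler.TaskLocality enumeration, with TaskLocality$.MODULE$.PROCESS_LOCAL() as the fallback spelling from Java):

    import org.apache.spark.scheduler.TaskInfo;
    import org.apache.spark.scheduler.TaskLocality;

    // Hypothetical fixture for exercising listener code in a unit test.
    static TaskInfo sampleTaskInfo() {
      return new TaskInfo(
          0L,                            // taskId
          0,                             // index
          0,                             // attemptNumber
          0,                             // partitionId
          System.currentTimeMillis(),    // launchTime
          "exec-1",                      // executorId (hypothetical ID)
          "localhost",                   // host
          TaskLocality.PROCESS_LOCAL(),  // taskLocality
          false);                        // speculative
    }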
Method Details

accumulables

public scala.collection.Seq<AccumulableInfo> accumulables()

Intermediate updates to accumulables during this task. Note that it is valid for the same accumulable to be updated multiple times in a single task or for two accumulables with the same name but different IDs to exist in a task.

Returns:
    (undocumented)
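Because this returns a Scala Seq, Java callers need an explicit conversion. A minimal sketch assuming a Scala 2.13 build of Spark (scala.jdk.javaapi.CollectionConverters; on Scala 2.12 builds, scala.collection.JavaConverters serves the same purpose), where info is a TaskInfo such as the one in the listener sketch above:

    import org.apache.spark.scheduler.AccumulableInfo;
    import scala.jdk.javaapi.CollectionConverters;

    // Convert the Scala Seq to a java.util.List and print each update.
    // name() and update() are scala.Option values on this class.
    for (AccumulableInfo acc : CollectionConverters.asJava(info.accumulables())) {
      System.out.println(acc.id() + " " + acc.name() + " -> " + acc.update());
    }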
attemptNumber

public int attemptNumber()

duration

public long duration()

executorId

public String executorId()

failed

public boolean failed()

finishTime

public long finishTime()

The time when the task has completed successfully (including the time to remotely fetch results, if necessary).

Returns:
    (undocumented)

finished

public boolean finished()

gettingResult

public boolean gettingResult()

gettingResultTime

public long gettingResultTime()

The time when the task started remotely getting the result. Will not be set if the task result was sent immediately when the task finished (as opposed to sending an IndirectTaskResult and later fetching the result from the block manager).

Returns:
    (undocumented)
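These timestamps compose: launchTime() marks the start, gettingResultTime() the optional start of the remote result fetch, and finishTime() the end, with duration() covering launch to finish. A minimal sketch that splits a finished task's wall-clock time accordingly (it assumes an unset gettingResultTime() reads as 0, matching the "will not be set" wording above, and that duration() is only meaningful once finished() is true):

    import org.apache.spark.scheduler.TaskInfo;

    // Split a finished task's wall-clock time into compute time and
    // result-fetch time, in milliseconds.
    static String timing(TaskInfo info) {
      long total = info.duration();  // launch-to-finish wall time
      if (info.gettingResultTime() > 0) {
        long fetch = info.finishTime() - info.gettingResultTime();
        return "compute " + (total - fetch) + " ms, result fetch " + fetch + " ms";
      }
      return "compute " + total + " ms (result sent with task completion)";
    }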
host

public String host()

id

public String id()

index

public int index()

The index of this task within its task set. Not necessarily the same as the ID of the RDD partition that the task is computing.

Returns:
    (undocumented)

killed

public boolean killed()

launchTime

public long launchTime()

launching

public boolean launching()

partitionId

public int partitionId()

The actual RDD partition ID in this task. The ID of the RDD partition is always the same across task attempts. This will be -1 for historical data, and available for all applications since Spark 3.3.

Returns:
    (undocumented)
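index() and partitionId() often coincide but need not (a retried stage may run only a subset of its partitions), and partitionId() is -1 when reading pre-3.3 history data. A hedged sketch of one reasonable fallback, using the task-set index when the partition ID is unavailable (partitionOf is not part of Spark):

    import org.apache.spark.scheduler.TaskInfo;

    // Prefer the recorded partition ID; fall back to the task-set index
    // for pre-3.3 event logs, where partitionId() is -1.
    static int partitionOf(TaskInfo info) {
      return info.partitionId() >= 0 ? info.partitionId() : info.index();
    }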
running

public boolean running()

speculative

public boolean speculative()

status

public String status()

successful

public boolean successful()

taskId

public long taskId()

taskLocality

public scala.Enumeration.Value taskLocality()