Package org.apache.spark.scheduler
Interface TaskScheduler
public interface TaskScheduler
Low-level task scheduler interface, currently implemented exclusively by TaskSchedulerImpl.
This interface allows plugging in different task schedulers. Each TaskScheduler schedules tasks for a single SparkContext. These schedulers get sets of tasks submitted to them from the DAGScheduler for each stage, and are responsible for sending the tasks to the cluster, running them, retrying if there are failures, and mitigating stragglers. They return events to the DAGScheduler.
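Because the interface is pluggable, the core of any implementation is lifecycle wiring: accept a DAGScheduler, receive TaskSets per stage, and report events back. The Scala sketch below is a hypothetical skeleton only; the class name is invented, it is left abstract so the callbacks not shown stay unimplemented, and since TaskScheduler and several parameter types are package-private, real code would have to live under the org.apache.spark.scheduler package as shown.

```scala
package org.apache.spark.scheduler

// Hypothetical skeleton illustrating the contract described above; left
// abstract so the remaining callbacks need not be stubbed out here.
private[spark] abstract class SketchTaskScheduler extends TaskScheduler {

  private var dagScheduler: DAGScheduler = _

  // Injected before start(); task completions, failures, and executor
  // loss are reported back through the DAGScheduler.
  override def setDAGScheduler(dagScheduler: DAGScheduler): Unit =
    this.dagScheduler = dagScheduler

  // Connect to the cluster backend.
  override def start(): Unit = {}

  // Called after start(); a real scheduler would block here until the
  // backend is ready to accept tasks.
  override def postStartHook(): Unit = {}

  // One TaskSet arrives per stage attempt from the DAGScheduler; the
  // scheduler owns queuing, resource offers, and per-task retries.
  override def submitTasks(taskSet: TaskSet): Unit = {}

  // Release backend resources.
  override def stop(exitCode: Int): Unit = {}
}
```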
Method Summary
scala.Option<String> applicationAttemptId()
    Get an application's attempt ID associated with the job.
String applicationId()
    Get an application ID associated with the job.
int defaultParallelism()
void executorDecommission(String executorId, org.apache.spark.scheduler.ExecutorDecommissionInfo decommissionInfo)
    Process a decommissioning executor.
boolean executorHeartbeatReceived(String execId, scala.Tuple2<Object, scala.collection.immutable.Seq<AccumulatorV2<?, ?>>>[] accumUpdates, BlockManagerId blockManagerId, scala.collection.mutable.Map<scala.Tuple2<Object, Object>, org.apache.spark.executor.ExecutorMetrics> executorUpdates)
    Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive.
void executorLost(String executorId, org.apache.spark.scheduler.ExecutorLossReason reason)
    Process a lost executor.
scala.Option<org.apache.spark.scheduler.ExecutorDecommissionState> getExecutorDecommissionState(String executorId)
    If an executor is decommissioned, return its corresponding decommission info.
void killAllTaskAttempts(int stageId, boolean interruptThread, String reason)
boolean killTaskAttempt(long taskId, boolean interruptThread, String reason)
    Kills a task attempt.
void notifyPartitionCompletion(int stageId, int partitionId)
void postStartHook()
org.apache.spark.scheduler.Pool rootPool()
scala.Enumeration.Value schedulingMode()
void setDAGScheduler(org.apache.spark.scheduler.DAGScheduler dagScheduler)
void start()
void stop(int exitCode)
void submitTasks(org.apache.spark.scheduler.TaskSet taskSet)
void workerRemoved(String workerId, String host, String message)
    Process a removed worker.
Method Details
applicationAttemptId
scala.Option<String> applicationAttemptId()
Get an application's attempt ID associated with the job.
Returns: An application's attempt ID
applicationId
String applicationId()
Get an application ID associated with the job.
Returns: An application ID
defaultParallelism
int defaultParallelism()
executorDecommission
void executorDecommission(String executorId, org.apache.spark.scheduler.ExecutorDecommissionInfo decommissionInfo)
Process a decommissioning executor.
Parameters:
executorId - the ID of the executor being decommissioned
decommissionInfo - information describing the decommission
executorHeartbeatReceived
boolean executorHeartbeatReceived(String execId, scala.Tuple2<Object, scala.collection.immutable.Seq<AccumulatorV2<?, ?>>>[] accumUpdates, BlockManagerId blockManagerId, scala.collection.mutable.Map<scala.Tuple2<Object, Object>, org.apache.spark.executor.ExecutorMetrics> executorUpdates)
Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive. Return true if the driver knows about the given block manager. Otherwise, return false, indicating that the block manager should re-register.
Parameters:
execId - the ID of the executor that sent the heartbeat
accumUpdates - accumulator updates from in-progress tasks, keyed by task ID
blockManagerId - the ID of the executor's BlockManager
executorUpdates - executor-level metrics, keyed by (stageId, stageAttemptId)
Returns: true if the driver knows about the given block manager; false if it should re-register
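The boolean result drives the heartbeat protocol: true acknowledges the heartbeat, while false tells the executor that its BlockManager is unknown to the driver and must re-register. Continuing the hypothetical SketchTaskScheduler above, a sketch of that contract; recordAccumulatorUpdates and knownBlockManagers are assumed internals of the implementing class, not part of the interface:

```scala
import scala.collection.mutable
import org.apache.spark.executor.ExecutorMetrics
import org.apache.spark.storage.BlockManagerId
import org.apache.spark.util.AccumulatorV2

// Hypothetical method body inside a TaskScheduler implementation.
override def executorHeartbeatReceived(
    execId: String,
    accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
    blockManagerId: BlockManagerId,
    executorUpdates: mutable.Map[(Int, Int), ExecutorMetrics]): Boolean = {
  // Forward per-task accumulator updates to task-progress tracking
  // (`recordAccumulatorUpdates` is an assumed helper).
  recordAccumulatorUpdates(execId, accumUpdates)
  // true  -> driver recognizes this BlockManager; heartbeat accepted
  // false -> the executor should re-register its BlockManager
  knownBlockManagers.contains(blockManagerId)
}
```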
executorLost
void executorLost(String executorId, org.apache.spark.scheduler.ExecutorLossReason reason)
Process a lost executor.
Parameters:
executorId - the ID of the lost executor
reason - the reason the executor was lost
getExecutorDecommissionState
scala.Option<org.apache.spark.scheduler.ExecutorDecommissionState> getExecutorDecommissionState(String executorId)
If an executor is decommissioned, return its corresponding decommission info.
Parameters:
executorId - the ID of the executor to query
Returns: the executor's decommission state, or None if the executor has not been decommissioned
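Because the result is a scala.Option, callers distinguish the two cases with a pattern match rather than a null check. A small assumed usage fragment, where scheduler is any TaskScheduler:

```scala
scheduler.getExecutorDecommissionState(execId) match {
  case Some(state) =>
    // The executor is decommissioning; e.g. avoid scheduling new tasks on it.
    println(s"Executor $execId is decommissioned: $state")
  case None =>
    // The executor is not known to be decommissioned.
}
```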
killAllTaskAttempts
void killAllTaskAttempts(int stageId, boolean interruptThread, String reason)
killTaskAttempt
boolean killTaskAttempt(long taskId, boolean interruptThread, String reason)
Kills a task attempt. Throws UnsupportedOperationException if the backend does not support killing tasks.
Parameters:
taskId - the ID of the task attempt to kill
interruptThread - whether to interrupt the thread running the task
reason - the reason for killing the task attempt
Returns: whether the task was successfully killed
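User code normally reaches this through SparkContext.killTaskAttempt, which delegates to the active scheduler. A hedged example, assuming a live SparkContext sc and a task attempt ID obtained from the web UI or a SparkListener:

```scala
// true if the task attempt was successfully killed.
val killed: Boolean = sc.killTaskAttempt(
  taskId = 42L,            // hypothetical task attempt ID
  interruptThread = true,  // also interrupt the task's running thread
  reason = "straggler; re-running the attempt elsewhere")
```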
notifyPartitionCompletion
void notifyPartitionCompletion(int stageId, int partitionId)
postStartHook
void postStartHook()
rootPool
org.apache.spark.scheduler.Pool rootPool()
schedulingMode
scala.Enumeration.Value schedulingMode()
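The returned value comes from the org.apache.spark.scheduler.SchedulingMode enumeration (FAIR, FIFO, or NONE), normally chosen through the spark.scheduler.mode configuration. A small assumed usage check:

```scala
import org.apache.spark.scheduler.SchedulingMode

// `scheduler` is any TaskScheduler; under FAIR mode, rootPool() holds
// the configured fair-scheduler pools.
if (scheduler.schedulingMode == SchedulingMode.FAIR) {
  println("Fair scheduling is enabled")
}
```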
setDAGScheduler
void setDAGScheduler(org.apache.spark.scheduler.DAGScheduler dagScheduler)
start
void start()
stop
void stop(int exitCode)
submitTasks
void submitTasks(org.apache.spark.scheduler.TaskSet taskSet)
workerRemoved
void workerRemoved(String workerId, String host, String message)
Process a removed worker.
Parameters:
workerId - the ID of the removed worker
host - the host on which the worker was running
message - a message explaining why the worker was removed