Package org.apache.spark.scheduler
Interface TaskScheduler
public interface TaskScheduler
Low-level task scheduler interface, currently implemented exclusively by TaskSchedulerImpl. This interface allows plugging in different task schedulers. Each TaskScheduler schedules tasks for a single SparkContext. These schedulers get sets of tasks submitted to them from the DAGScheduler for each stage, and are responsible for sending the tasks to the cluster, running them, retrying if there are failures, and mitigating stragglers. They return events to the DAGScheduler.
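
The contract is small enough to sketch. The Scala stand-in below is illustrative only: TaskSetStub, SimpleTaskScheduler, and LocalRunner are hypothetical names, and the real interface (summarized next) has many more members.

    // Illustrative stand-in for the contract described above; NOT Spark's
    // org.apache.spark.scheduler.TaskScheduler, which has many more members.
    case class TaskSetStub(stageId: Int, tasks: Seq[String])

    trait SimpleTaskScheduler {
      def start(): Unit
      // The DAGScheduler submits one TaskSet per stage; from here the
      // scheduler owns placement, retries, and straggler mitigation.
      def submitTasks(taskSet: TaskSetStub): Unit
      def stop(exitCode: Int): Unit
    }

    // Toy implementation that runs every task locally and in order.
    class LocalRunner extends SimpleTaskScheduler {
      override def start(): Unit = println("scheduler started")
      override def submitTasks(taskSet: TaskSetStub): Unit =
        taskSet.tasks.foreach(t => println(s"stage ${taskSet.stageId}: ran $t"))
      override def stop(exitCode: Int): Unit = println(s"stopped with code $exitCode")
    }
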
Method Summary

scala.Option<String> applicationAttemptId()
    Get an application's attempt ID associated with the job.
String applicationId()
    Get an application ID associated with the job.
int defaultParallelism()
void executorDecommission(String executorId, org.apache.spark.scheduler.ExecutorDecommissionInfo decommissionInfo)
    Process a decommissioning executor.
boolean executorHeartbeatReceived(String execId, scala.Tuple2<Object, scala.collection.immutable.Seq<AccumulatorV2<?, ?>>>[] accumUpdates, BlockManagerId blockManagerId, scala.collection.mutable.Map<scala.Tuple2<Object, Object>, org.apache.spark.executor.ExecutorMetrics> executorUpdates)
    Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive.
void executorLost(String executorId, org.apache.spark.scheduler.ExecutorLossReason reason)
    Process a lost executor.
scala.Option<org.apache.spark.scheduler.ExecutorDecommissionState> getExecutorDecommissionState(String executorId)
    If an executor is decommissioned, return its corresponding decommission state.
void killAllTaskAttempts(int stageId, boolean interruptThread, String reason)
boolean killTaskAttempt(long taskId, boolean interruptThread, String reason)
    Kills a task attempt.
void notifyPartitionCompletion(int stageId, int partitionId)
void postStartHook()
org.apache.spark.scheduler.Pool rootPool()
scala.Enumeration.Value schedulingMode()
void setDAGScheduler(org.apache.spark.scheduler.DAGScheduler dagScheduler)
void start()
void stop(int exitCode)
void submitTasks(org.apache.spark.scheduler.TaskSet taskSet)
void workerRemoved(String workerId, String host, String message)
    Process a removed worker.

Method Details

applicationAttemptId
scala.Option<String> applicationAttemptId()
Get an application's attempt ID associated with the job.
Returns:
    An application's attempt ID

applicationId
String applicationId()
Get an application ID associated with the job.
Returns:
    An application ID

defaultParallelism
int defaultParallelism()

executorDecommission
void executorDecommission(String executorId, org.apache.spark.scheduler.ExecutorDecommissionInfo decommissionInfo)
Process a decommissioning executor.
Parameters:
    executorId - (undocumented)
    decommissionInfo - (undocumented)

executorHeartbeatReceived
boolean executorHeartbeatReceived(String execId, scala.Tuple2<Object, scala.collection.immutable.Seq<AccumulatorV2<?, ?>>>[] accumUpdates, BlockManagerId blockManagerId, scala.collection.mutable.Map<scala.Tuple2<Object, Object>, org.apache.spark.executor.ExecutorMetrics> executorUpdates)
Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive. Return true if the driver knows about the given block manager. Otherwise, return false, indicating that the block manager should re-register.
Parameters:
    execId - (undocumented)
    accumUpdates - (undocumented)
    blockManagerId - (undocumented)
    executorUpdates - (undocumented)
Returns:
    true if the driver knows about the given block manager; false if it should re-register
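
The boolean result drives a re-registration handshake between driver and executor. A minimal sketch of that decision, assuming a hypothetical knownBlockManagers set in place of the driver's real bookkeeping:

    import scala.collection.mutable

    // Hypothetical driver-side bookkeeping; illustrates only the return-value
    // contract, not Spark's actual heartbeat handling.
    object HeartbeatContract {
      private val knownBlockManagers = mutable.Set.empty[String]

      def registerBlockManager(id: String): Unit = knownBlockManagers += id

      // true: driver knows this block manager, heartbeat accepted.
      // false: unknown to the driver, executor should re-register it.
      def executorHeartbeatReceived(execId: String, blockManagerId: String): Boolean =
        knownBlockManagers.contains(blockManagerId)
    }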
 
executorLost
void executorLost(String executorId, org.apache.spark.scheduler.ExecutorLossReason reason)
Process a lost executor.
Parameters:
    executorId - (undocumented)
    reason - (undocumented)
 
getExecutorDecommissionState
scala.Option<org.apache.spark.scheduler.ExecutorDecommissionState> getExecutorDecommissionState(String executorId)
If an executor is decommissioned, return its corresponding decommission state.
Parameters:
    executorId - (undocumented)
Returns:
    the executor's decommission state if it has been decommissioned; None otherwise
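
Because the result is a scala.Option, callers can pattern match rather than null-check. A small usage sketch, with a hypothetical lookup standing in for a live scheduler and a plain String in place of ExecutorDecommissionState:

    // Hypothetical stand-in for getExecutorDecommissionState.
    def decommissionState(executorId: String): Option[String] =
      Map("exec-7" -> "host scheduled for maintenance").get(executorId)

    decommissionState("exec-7") match {
      case Some(state) => println(s"exec-7 decommissioning: $state")
      case None        => println("exec-7 is not decommissioned")
    }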
 
killAllTaskAttempts
void killAllTaskAttempts(int stageId, boolean interruptThread, String reason)

killTaskAttempt
boolean killTaskAttempt(long taskId, boolean interruptThread, String reason)
Kills a task attempt. Throws UnsupportedOperationException if the backend does not support killing tasks.
Parameters:
    taskId - (undocumented)
    interruptThread - (undocumented)
    reason - (undocumented)
Returns:
    Whether the task was successfully killed.
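
The two failure modes are distinct: a backend that cannot kill tasks throws, while an attempt that no longer exists simply yields false. An illustrative sketch of that contract (the parameters beyond the documented three are hypothetical stand-ins for scheduler state, not part of the real signature):

    // Illustrative only; not TaskSchedulerImpl's logic.
    def killTaskAttempt(taskId: Long, interruptThread: Boolean, reason: String,
                        backendSupportsKill: Boolean,
                        runningTaskIds: Set[Long]): Boolean = {
      if (!backendSupportsKill)
        throw new UnsupportedOperationException("backend cannot kill tasks")
      // false means the attempt already finished or is unknown.
      runningTaskIds.contains(taskId)
    }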
 
notifyPartitionCompletion
void notifyPartitionCompletion(int stageId, int partitionId)

postStartHook
void postStartHook()

rootPool
org.apache.spark.scheduler.Pool rootPool()

schedulingMode
scala.Enumeration.Value schedulingMode()

setDAGScheduler
void setDAGScheduler(org.apache.spark.scheduler.DAGScheduler dagScheduler)

start
void start()

stop
void stop(int exitCode)

submitTasks
void submitTasks(org.apache.spark.scheduler.TaskSet taskSet)

workerRemoved
void workerRemoved(String workerId, String host, String message)
Process a removed worker.
Parameters:
    workerId - (undocumented)
    host - (undocumented)
    message - (undocumented)