Interface TaskScheduler


public interface TaskScheduler
Low-level task scheduler interface, currently implemented exclusively by TaskSchedulerImpl. This interface allows plugging in different task schedulers. Each TaskScheduler schedules tasks for a single SparkContext. These schedulers get sets of tasks submitted to them from the DAGScheduler for each stage, and are responsible for sending the tasks to the cluster, running them, retrying if there are failures, and mitigating stragglers. They return events to the DAGScheduler.
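
The flow described above can be summarized in code. What follows is a minimal, illustrative sketch using hypothetical stand-in types (ToyTaskScheduler, ToyDagScheduler, and so on), not the real TaskScheduler trait, which has many more members: a scheduler is handed a DAG scheduler for upcalls, is started, receives one task set per stage, and reports completion events back.

    // Illustrative stand-ins only; the real TaskScheduler and DAGScheduler have much richer APIs.
    final case class ToyTask(stageId: Int, partition: Int)
    final case class ToyTaskSet(stageId: Int, tasks: Seq[ToyTask])

    trait ToyDagScheduler {
      def taskSucceeded(task: ToyTask): Unit // "events returned to the DAGScheduler"
    }

    class ToyTaskScheduler {
      private var dag: ToyDagScheduler = _
      def setDAGScheduler(d: ToyDagScheduler): Unit = { dag = d } // wired up before any submitTasks call
      def start(): Unit = println("scheduler started")
      def submitTasks(ts: ToyTaskSet): Unit =                     // one TaskSet per stage
        ts.tasks.foreach { t => dag.taskSucceeded(t) }            // "send to cluster", then report back
      def stop(exitCode: Int): Unit = println(s"scheduler stopped, exit code $exitCode")
    }

    object ToySchedulerDemo extends App {
      val sched = new ToyTaskScheduler
      sched.setDAGScheduler(new ToyDagScheduler {
        def taskSucceeded(task: ToyTask): Unit = println(s"completed $task")
      })
      sched.start()
      sched.submitTasks(ToyTaskSet(stageId = 0, tasks = Seq(ToyTask(0, 0), ToyTask(0, 1))))
      sched.stop(0)
    }
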
  • Method Details

    • applicationAttemptId

      scala.Option<String> applicationAttemptId()
      Get the attempt ID of the application associated with the job, if any.

      Returns:
      The application's attempt ID, if one is defined
    • applicationId

      String applicationId()
      Get the application ID associated with the job. (An illustrative usage sketch follows the method list.)

      Returns:
      The application ID
    • defaultParallelism

      int defaultParallelism()
      Get the default level of parallelism to use in the cluster, as a hint for sizing jobs.
    • executorDecommission

      void executorDecommission(String executorId, org.apache.spark.scheduler.ExecutorDecommissionInfo decommissionInfo)
      Process a decommissioning executor.
      Parameters:
      executorId - the ID of the executor being decommissioned
      decommissionInfo - additional information about the decommission, such as the reason for it
    • executorHeartbeatReceived

      boolean executorHeartbeatReceived(String execId, scala.Tuple2<Object,scala.collection.immutable.Seq<AccumulatorV2<?,?>>>[] accumUpdates, BlockManagerId blockManagerId, scala.collection.mutable.Map<scala.Tuple2<Object,Object>,org.apache.spark.executor.ExecutorMetrics> executorUpdates)
      Update metrics for in-progress tasks and executor metrics, and let the master know that the BlockManager is still alive. Return true if the driver knows about the given block manager. Otherwise, return false, indicating that the block manager should re-register. (A sketch of how a caller can act on this flag follows the method list.)
      Parameters:
      execId - the ID of the executor reporting the heartbeat
      accumUpdates - per-task accumulator updates, as (taskId, accumulators) pairs
      blockManagerId - the ID of the reporting executor's BlockManager
      executorUpdates - executor-level metrics updates, keyed by stage and stage attempt
      Returns:
      true if the driver knows about the given block manager; false if it should re-register
    • executorLost

      void executorLost(String executorId, org.apache.spark.scheduler.ExecutorLossReason reason)
      Process a lost executor.
      Parameters:
      executorId - the ID of the executor that was lost
      reason - the reason the executor was lost
    • getExecutorDecommissionState

      scala.Option<org.apache.spark.scheduler.ExecutorDecommissionState> getExecutorDecommissionState(String executorId)
      If an executor has been decommissioned, return its corresponding decommission state. (An illustrative sketch of this record-and-look-up pattern follows the method list.)
      Parameters:
      executorId - the ID of the executor to look up
      Returns:
      The executor's decommission state, or None if the executor has not been decommissioned
    • killAllTaskAttempts

      void killAllTaskAttempts(int stageId, boolean interruptThread, String reason)
      Kill all the running task attempts in a stage.
    • killTaskAttempt

      boolean killTaskAttempt(long taskId, boolean interruptThread, String reason)
      Kills a task attempt. Throws UnsupportedOperationException if the backend does not support killing a task. (A usage sketch via SparkContext.killTaskAttempt follows the method list.)

      Parameters:
      taskId - the ID of the task attempt to kill
      interruptThread - whether to interrupt the thread running the task
      reason - a short, human-readable reason for killing the task
      Returns:
      Whether the task was successfully killed.
    • notifyPartitionCompletion

      void notifyPartitionCompletion(int stageId, int partitionId)
      Notify the scheduler that the given partition of the given stage has already been completed, so any remaining task attempts for it can be skipped.
    • postStartHook

      void postStartHook()
      Invoked after the system has successfully initialized (typically in the SparkContext constructor), for example to wait until executors have registered before scheduling begins.
    • rootPool

      org.apache.spark.scheduler.Pool rootPool()
      The root scheduling pool, under which all task sets and sub-pools are organized.
    • schedulingMode

      scala.Enumeration.Value schedulingMode()
      The scheduling mode used to order task sets within the root pool (FIFO by default, or FAIR), as configured by spark.scheduler.mode. (A configuration sketch follows the method list.)
    • setDAGScheduler

      void setDAGScheduler(org.apache.spark.scheduler.DAGScheduler dagScheduler)
      Set the DAGScheduler for upcalls; guaranteed to be set before submitTasks is called.
    • start

      void start()
    • stop

      void stop(int exitCode)
      Stop the scheduler and disconnect from the cluster.
    • submitTasks

      void submitTasks(org.apache.spark.scheduler.TaskSet taskSet)
      Submit a set of tasks (one TaskSet per stage attempt) to be run on the cluster.
    • workerRemoved

      void workerRemoved(String workerId, String host, String message)
      Process a removed worker.
      Parameters:
      workerId - the ID of the removed worker
      host - the host the worker was running on
      message - the reason the worker was removed
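  • Usage examples (illustrative)

    • applicationId / applicationAttemptId
      Both IDs surface through SparkContext. A minimal sketch, assuming a local master; on local and standalone deployments the attempt ID is typically None, while cluster managers such as YARN supply one:

        import org.apache.spark.{SparkConf, SparkContext}

        object AppIdDemo extends App {
          val sc = new SparkContext(new SparkConf().setAppName("app-id-demo").setMaster("local[*]"))
          println(s"application ID:      ${sc.applicationId}")        // e.g. local-1700000000000
          println(s"application attempt: ${sc.applicationAttemptId}") // Option[String]; often None locally
          sc.stop()
        }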
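    • executorHeartbeatReceived
      The boolean result is easy to misread: true means the driver already knows the reporting BlockManager, false means the executor should re-register it. A hypothetical, simplified driver-side handler (all names are illustrative; this is not Spark's actual heartbeat receiver):

        object HeartbeatSketch extends App {
          // Hypothetical stand-in for a heartbeat response message.
          final case class HeartbeatResponse(reregisterBlockManager: Boolean)

          def handle(driverKnowsBlockManager: Boolean): HeartbeatResponse =
            // A false result from executorHeartbeatReceived means the block manager is unknown,
            // so the executor is asked to re-register it.
            HeartbeatResponse(reregisterBlockManager = !driverKnowsBlockManager)

          assert(handle(driverKnowsBlockManager = true) == HeartbeatResponse(false))
          assert(handle(driverKnowsBlockManager = false) == HeartbeatResponse(true))
        }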
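    • executorDecommission / getExecutorDecommissionState
      These two methods form a record-and-look-up pair. A self-contained sketch of that pattern with simplified stand-in types (not Spark's ExecutorDecommissionInfo/ExecutorDecommissionState):

        object DecommissionSketch extends App {
          // Simplified stand-ins for the real decommission info/state classes.
          final case class DecommissionInfo(message: String)
          final case class DecommissionState(startTimeMs: Long, message: String)

          private val decommissioned = scala.collection.mutable.Map.empty[String, DecommissionState]

          def executorDecommission(executorId: String, info: DecommissionInfo): Unit =
            decommissioned(executorId) = DecommissionState(System.currentTimeMillis(), info.message)

          def getExecutorDecommissionState(executorId: String): Option[DecommissionState] =
            decommissioned.get(executorId)

          executorDecommission("exec-1", DecommissionInfo("spot instance reclaimed"))
          println(getExecutorDecommissionState("exec-1")) // Some(DecommissionState(...))
          println(getExecutorDecommissionState("exec-2")) // None: never decommissioned
        }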
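    • killTaskAttempt
      Applications reach this method through SparkContext.killTaskAttempt, which routes the request to the scheduler. A minimal sketch; the task ID below is a placeholder, and real IDs would normally be collected from a SparkListener's onTaskStart events:

        import org.apache.spark.{SparkConf, SparkContext}

        object KillTaskDemo extends App {
          val sc = new SparkContext(new SparkConf().setAppName("kill-demo").setMaster("local[*]"))
          // 42L is a placeholder task ID, used only for illustration.
          val killed = sc.killTaskAttempt(taskId = 42L, interruptThread = true, reason = "example: runaway task")
          println(s"killed = $killed") // false here, since no task with that ID is running
          sc.stop()
        }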
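    • schedulingMode / rootPool
      The scheduling mode is driven by the spark.scheduler.mode configuration (FIFO by default, or FAIR). A minimal sketch of enabling fair scheduling and assigning a job to a pool; the pool name is illustrative:

        import org.apache.spark.{SparkConf, SparkContext}

        object FairModeDemo extends App {
          val conf = new SparkConf()
            .setAppName("fair-mode-demo")
            .setMaster("local[*]")
            .set("spark.scheduler.mode", "FAIR") // FIFO is the default
          val sc = new SparkContext(conf)

          // Jobs submitted from this thread go into the named pool; "highPriority" is illustrative.
          sc.setLocalProperty("spark.scheduler.pool", "highPriority")
          println(sc.parallelize(1 to 1000).count())
          sc.stop()
        }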