Types of events that can be handled by the DAGScheduler.
Interface used to listen for job completion or failure events after submitting a job to the DAGScheduler.
:: DeveloperApi :: A result of a job in the DAGScheduler.
Result returned by a ShuffleMapTask to a scheduler.
An interface for schedulable entities.
An interface for building the Schedulable tree: buildPools builds the tree nodes (Pools), and addTaskSetManager builds the leaf nodes (TaskSetManagers).
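The two-step construction can be sketched as follows. This is a minimal illustrative model, not Spark's actual API; the class and function names are hypothetical stand-ins for the builder's roles.

```python
# Hypothetical sketch: pools are inner nodes, task set managers are leaves.
class Pool:
    def __init__(self, name):
        self.name = name
        self.children = []  # Pools or TaskSetManagers

    def add(self, schedulable):
        self.children.append(schedulable)

class TaskSetManager:
    def __init__(self, stage_id):
        self.stage_id = stage_id

def build_pools(root, pool_names):
    """Build the tree nodes (pools) under the root pool."""
    for name in pool_names:
        root.add(Pool(name))

def add_task_set_manager(root, manager, pool_name):
    """Attach a leaf (TaskSetManager) to the named pool, or to root as a fallback."""
    for child in root.children:
        if isinstance(child, Pool) and child.name == pool_name:
            child.add(manager)
            return
    root.add(manager)

root = Pool("root")
build_pools(root, ["production", "adhoc"])
add_task_set_manager(root, TaskSetManager(stage_id=0), "production")
```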
A backend interface for scheduling systems that allows plugging in different ones under TaskSchedulerImpl.
An interface for sorting algorithms: FIFO ordering between TaskSetManagers, and FS (fair sharing) between Pools, with FIFO or FS within each Pool.
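The two orderings can be sketched as sort keys. This is a rough conceptual model, not Spark's exact implementation: it assumes each schedulable exposes priority, stage id, running-task count, minimum share, and weight fields.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    priority: int       # lower value = submitted earlier
    stage_id: int
    running_tasks: int
    min_share: int
    weight: int

def fifo_key(s):
    # FIFO: earlier jobs first, then earlier stages.
    return (s.priority, s.stage_id)

def fair_key(s):
    # Fair sharing: entities starved below their minimum share run first,
    # then order by saturation relative to min share and weight.
    needy = s.running_tasks < s.min_share
    min_share_ratio = s.running_tasks / max(s.min_share, 1)
    weight_ratio = s.running_tasks / s.weight
    return (not needy, min_share_ratio, weight_ratio)

# "a" was submitted first but is saturated; "b" is below its minimum share.
a = Entity("a", priority=1, stage_id=0, running_tasks=4, min_share=2, weight=1)
b = Entity("b", priority=2, stage_id=0, running_tasks=0, min_share=2, weight=1)

fifo_order = sorted([a, b], key=fifo_key)  # FIFO favors the earlier job "a"
fair_order = sorted([a, b], key=fair_key)  # fair sharing favors starved "b"
```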
:: DeveloperApi :: Interface for listening to events from the Spark scheduler.
A location where a task should run.
Low-level task scheduler interface, currently implemented exclusively by TaskSchedulerImpl.
:: DeveloperApi :: Information about an accumulable modified during a task or stage.
Tracks information about an active job in the DAGScheduler.
A simple listener for application events.
The high-level scheduling layer that implements stage-oriented scheduling.
A TaskResult that contains the task's return value and accumulator updates.
A SparkListener that logs events to persistent storage.
A location that includes both a host and an executor id on that host.
Represents an explanation for an executor or whole slave failing or exiting.
A location on a host that is cached by HDFS.
A location on a host.
A reference to a DirectTaskResult that has been stored in the worker's BlockManager.
:: DeveloperApi :: Parses and holds information about inputFormat (and files) specified as a parameter.
:: DeveloperApi :: A logger class to record runtime information for jobs in Spark.
An object that waits for a DAGScheduler job to complete.
Asynchronously passes SparkListenerEvents to registered SparkListeners.
Authority that decides whether tasks can commit output to HDFS.
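The core idea of such an authority can be sketched as first-asker-wins per output partition, so speculative duplicate attempts cannot both commit. The class and method names below are illustrative, not the actual Spark API.

```python
# Minimal sketch: grant commit permission to the first task attempt that
# asks for a given (stage, partition), and deny all later attempts.
class CommitCoordinator:
    def __init__(self):
        self._winners = {}  # (stage, partition) -> winning attempt id

    def can_commit(self, stage, partition, attempt):
        key = (stage, partition)
        winner = self._winners.setdefault(key, attempt)
        return winner == attempt

coord = CommitCoordinator()
granted = coord.can_commit(stage=1, partition=0, attempt=0)  # → True
denied = coord.can_commit(stage=1, partition=0, attempt=1)   # → False
```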
A Schedulable entity that represents a collection of Pools or TaskSetManagers.
A SparkListenerBus that can be used to replay events from serialized event data.
A task that sends back the output to the driver application.
"FAIR" and "FIFO" determine which policy is used to order tasks amongst a Schedulable's sub-queues; "NONE" is used when a Schedulable has no sub-queues.
A ShuffleMapTask divides the elements of an RDD into multiple buckets (based on a partitioner specified in the ShuffleDependency).
Periodic updates from executors.
An internal class that describes the metadata of an event log.
A stage is a set of independent tasks all computing the same function that need to run as part of a Spark job, where all the tasks have the same shuffle dependencies.
:: DeveloperApi :: Stores information about a stage to pass from the scheduler to SparkListeners.
:: DeveloperApi :: Simple SparkListener that logs a few summary statistics when each stage completes.
A unit of execution.
Description of a task that gets passed onto executors to be executed, usually created by TaskSetManager.resourceOffer.
:: DeveloperApi :: Information about a running task attempt inside a TaskSet.
Runs a thread pool that deserializes and remotely fetches (if necessary) task results.
Schedules tasks for multiple types of clusters by acting through a SchedulerBackend.
A set of tasks submitted together to the low-level TaskScheduler, usually representing missing partitions of a particular stage.
Schedules the tasks within a single TaskSet in the TaskSchedulerImpl.
Represents free resources available on an executor.