Listener class used when any item has been cleaned by the Cleaner class.
Classes that represent cleaning tasks.
A future for the result of an action to support cancellation.
Handle via which a "run" function passed to a ComplexFutureAction can submit jobs for execution.
An identifier for a partition in an RDD.
Exposes information about Spark Executors.
Exposes information about Spark Jobs.
Exposes information about Spark Stages.
:: DeveloperApi :: Various possible reasons why a task ended.
:: DeveloperApi :: Various possible reasons why a task failed.
:: DeveloperApi :: A set of functions used to aggregate data.
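Spark's Aggregator bundles three functions — createCombiner, mergeValue, and mergeCombiners — that together implement combine-by-key. A minimal Python sketch of how those three functions cooperate (the three function names follow the Scala API; everything else here is illustrative, not Spark code):

```python
# Sketch of combine-by-key using an Aggregator's three functions:
#   create_combiner: V -> C, merge_value: (C, V) -> C, merge_combiners: (C, C) -> C

def combine_by_key(pairs, create_combiner, merge_value, merge_combiners):
    """Map-side combining of (key, value) pairs, as an Aggregator would do."""
    combiners = {}
    for key, value in pairs:
        if key not in combiners:
            combiners[key] = create_combiner(value)
        else:
            combiners[key] = merge_value(combiners[key], value)
    return combiners

def merge_partitions(a, b, merge_combiners):
    """Reduce-side merge of per-partition combiner maps."""
    out = dict(a)
    for key, c in b.items():
        out[key] = merge_combiners(out[key], c) if key in out else c
    return out

# Example: per-key (sum, count), the classic building block for averages.
cc = lambda v: (v, 1)
mv = lambda c, v: (c[0] + v, c[1] + 1)
mc = lambda c1, c2: (c1[0] + c2[0], c1[1] + c2[1])

p1 = combine_by_key([("a", 1), ("a", 3)], cc, mv, mc)
p2 = combine_by_key([("a", 5), ("b", 2)], cc, mv, mc)
merged = merge_partitions(p1, p2, mc)
# merged: {"a": (9, 3), "b": (2, 1)}
```

The split into map-side combining and a separate combiner merge is what lets Spark reduce shuffle traffic before data crosses the network.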
:: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage.
:: Experimental :: Carries all task infos of a barrier task.
A WeakReference associated with a CleanupTask.
:: DeveloperApi :: A TaskContext-aware iterator.
For each barrier stage attempt, at most one barrier() call can be active at any time; thus (stageId, stageAttemptId) identifies the stage attempt that a barrier() call comes from.
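The invariant above — at most one active barrier() call per stage attempt — can be pictured as a registry keyed by (stageId, stageAttemptId). The sketch below is illustrative Python, not Spark's actual barrier coordinator code; the class and method names are made up for this example:

```python
# Illustrative sketch: enforce at most one active barrier() call per
# (stage_id, stage_attempt_id). Not Spark's actual coordinator implementation.

class BarrierRegistry:
    def __init__(self):
        self._active = set()  # stage attempts with a barrier() call in flight

    def begin_barrier(self, stage_id, stage_attempt_id):
        key = (stage_id, stage_attempt_id)
        if key in self._active:
            raise RuntimeError(f"barrier() already active for stage attempt {key}")
        self._active.add(key)

    def end_barrier(self, stage_id, stage_attempt_id):
        self._active.discard((stage_id, stage_attempt_id))

registry = BarrierRegistry()
registry.begin_barrier(0, 0)   # first barrier() call for attempt (0, 0)
registry.end_barrier(0, 0)     # all tasks reached the barrier; call completes
registry.begin_barrier(0, 1)   # a retried stage attempt is a distinct key
```

Keying on the pair rather than the stage id alone is what keeps a retried stage attempt from colliding with a stale barrier() call from a previous attempt.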
:: DeveloperApi :: Base class for dependencies.
:: DeveloperApi :: Task failed due to a runtime exception.
:: DeveloperApi :: The task failed because the executor that it was running on was lost.
:: DeveloperApi :: Task failed to fetch shuffle data from a remote node.
A collection of fields and methods concerned with internal accumulators that represent task level metrics.
:: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality.
:: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD.
:: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs.
An object that defines how the elements in a key-value pair RDD are partitioned by key.
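A Partitioner maps each key to a partition index in [0, numPartitions). Spark's default HashPartitioner takes a non-negative modulus of the key's hash code and sends null keys to partition 0; a Python sketch of that scheme (illustrative, not Spark's Scala code):

```python
# Sketch of hash partitioning: key -> partition index in [0, num_partitions).
# Spark's HashPartitioner computes nonNegativeMod(key.hashCode, numPartitions);
# the sign correction matters in Java/Scala, where % can return a negative
# result (Python's % is already non-negative for a positive modulus).

def non_negative_mod(x, mod):
    r = x % mod
    return r + mod if r < 0 else r

def get_partition(key, num_partitions):
    """Assign a key to a partition; equal keys always co-locate."""
    if key is None:
        return 0  # HashPartitioner sends null keys to partition 0
    return non_negative_mod(hash(key), num_partitions)

# Every occurrence of the same key lands in the same partition,
# which is what makes per-key operations like reduceByKey possible.
parts = {get_partition(k, 4) for k in ["a", "a", "a"]}
```

Co-locating equal keys is the contract that shuffle-based operations rely on; any custom Partitioner must preserve it.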
:: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.
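A narrow dependency's getParents method maps a child partition to the parent partitions it reads: OneToOneDependency returns the same index, while RangeDependency shifts an index between aligned ranges (as in a union of RDDs). A sketch of that lookup logic, with illustrative Python stand-ins for the Scala classes:

```python
# Sketch of narrow-dependency parent lookup (getParents), mirroring
# OneToOneDependency and RangeDependency. Illustrative, not Spark code.

class OneToOneDep:
    """Child partition i depends on parent partition i."""
    def get_parents(self, partition_id):
        return [partition_id]

class RangeDep:
    """Child partitions [out_start, out_start + length) map to
    parent partitions [in_start, in_start + length)."""
    def __init__(self, in_start, out_start, length):
        self.in_start, self.out_start, self.length = in_start, out_start, length

    def get_parents(self, partition_id):
        if self.out_start <= partition_id < self.out_start + self.length:
            return [partition_id - self.out_start + self.in_start]
        return []

# A union of a 3-partition RDD and a 2-partition RDD: the second parent's
# partitions appear at child indices 3 and 4.
dep = RangeDep(in_start=0, out_start=3, length=2)
# dep.get_parents(3) -> [0]; dep.get_parents(1) -> [] (first parent's range)
```

Because each child partition depends on a small, statically known set of parent partitions, narrow dependencies can be computed without a shuffle.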
:: DeveloperApi :: A Partitioner that partitions sortable records by range into roughly equal ranges.
SerializableWritable&lt;T extends org.apache.hadoop.io.Writable&gt;
:: DeveloperApi :: Represents a dependency on the output of a shuffle stage.
Helper class used by the MapOutputTrackerMaster to perform bookkeeping for a single ShuffleMapStage.
Configuration for a Spark application.
Main entry point for Spark functionality.
:: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, RpcEnv, block manager, map output tracker, etc.
Resolves paths to files added through SparkContext.addFile().
Class that allows users to receive all SparkListener events.
A collection of regexes for extracting information from the master string.
Low-level status reporting APIs for monitoring job and stage progress.
:: DeveloperApi :: Task succeeded.
:: DeveloperApi :: Task requested the driver to commit, but was denied.
Contextual information about a task which can be read or mutated during execution.
:: DeveloperApi :: Task was killed intentionally and needs to be rescheduled.
:: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched.
An event that SparkContext uses to notify HeartbeatReceiver that SparkContext.taskScheduler is created.
Utilities for tests.
:: DeveloperApi :: We don't know why the task ended -- for example, because of a ClassNotFound exception when deserializing the task result.
:: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected).
Some classes here, such as StorageLevel, are also used in Java, but the org.apache.spark.api.java package contains the main Java API.