| Interface | Description | 
|---|---|
| BarrierCoordinatorMessage | |
| CleanerListener | Listener class used when any item has been cleaned by the Cleaner class. | 
| CleanupTask | Classes that represent cleaning tasks. | 
| FutureAction<T> | A future for the result of an action, supporting cancellation (see the sketch after this table). | 
| JobSubmitter | Handle via which a "run" function passed to a ComplexFutureAction can submit jobs for execution. | 
| MapOutputTrackerMasterMessage | |
| MapOutputTrackerMessage | |
| Partition | An identifier for a partition in an RDD. | 
| PartitionEvaluator<T,U> | An evaluator for computing RDD partitions. | 
| PartitionEvaluatorFactory<T,U> | A factory to create PartitionEvaluator. | 
| QueryContext | Query context of a SparkThrowable. | 
| SparkExecutorInfo | Exposes information about Spark Executors. | 
| SparkJobInfo | Exposes information about Spark Jobs. | 
| SparkStageInfo | Exposes information about Spark Stages. | 
| SparkThrowable | Interface mixed into Throwables thrown from Spark. | 
| TaskEndReason | :: DeveloperApi :: Various possible reasons why a task ended. | 
| TaskFailedReason | :: DeveloperApi :: Various possible reasons why a task failed. | 
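FutureAction shows up whenever an asynchronous RDD action is launched. A minimal sketch, assuming a local-mode driver and the built-in async action RDD.countAsync (the object name FutureActionDemo is illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._

import org.apache.spark.{SparkConf, SparkContext}

object FutureActionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("future-action-demo").setMaster("local[2]"))
    try {
      // countAsync() returns a FutureAction[Long] instead of blocking like count().
      val action = sc.parallelize(1L to 1000000L).countAsync()
      // A FutureAction is a scala.concurrent.Future, so it can be awaited...
      println(s"count = ${Await.result(action, 60.seconds)}")
      // ...or cancelled with action.cancel(), which kills the underlying Spark job(s).
    } finally {
      sc.stop()
    }
  }
}
```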
| Class | Description | 
|---|---|
| Aggregator<K,V,C> | :: DeveloperApi :: A set of functions used to aggregate data. | 
| BarrierTaskContext | :: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage. | 
| BarrierTaskInfo | :: Experimental :: Carries all task infos of a barrier task. | 
| CleanAccum | |
| CleanBroadcast | |
| CleanCheckpoint | |
| CleanRDD | |
| CleanShuffle | |
| CleanSparkListener | |
| CleanupTaskWeakReference | A WeakReference associated with a CleanupTask. | 
| ComplexFutureAction<T> | A FutureAction for actions that could trigger multiple Spark jobs. | 
| ContextAwareIterator<T> | :: DeveloperApi :: A TaskContext aware iterator. | 
| ContextBarrierId | For each barrier stage attempt, at most one barrier() call can be active at any time, so (stageId, stageAttemptId) identifies the stage attempt that a barrier() call comes from. | 
| Dependency<T> | :: DeveloperApi :: Base class for dependencies. | 
| ErrorClassesJsonReader | A reader to load error information from one or more JSON files. | 
| ErrorInfo | Information associated with an error class. | 
| ErrorMessageFormat | |
| ErrorSubInfo | Information associated with an error subclass. | 
| ExceptionFailure | :: DeveloperApi :: Task failed due to a runtime exception. | 
| ExecutorLostFailure | :: DeveloperApi :: The task failed because the executor that it was running on was lost. | 
| ExecutorRegistered | |
| ExecutorRemoved | |
| ExpireDeadHosts | |
| FetchFailed | :: DeveloperApi :: Task failed to fetch shuffle data from a remote node. | 
| HashPartitioner | A Partitioner that implements hash-based partitioning using Java's Object.hashCode (see the sketch after this table). | 
| InternalAccumulator | A collection of fields and methods concerned with internal accumulators that represent task-level metrics. | 
| InternalAccumulator.input$ | |
| InternalAccumulator.output$ | |
| InternalAccumulator.shuffleRead$ | |
| InternalAccumulator.shuffleWrite$ | |
| InterruptibleIterator<T> | :: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality. | 
| NarrowDependency<T> | :: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD. | 
| OneToOneDependency<T> | :: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs. | 
| Partitioner | An object that defines how the elements in a key-value pair RDD are partitioned by key. | 
| RangeDependency<T> | :: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs. | 
| RangePartitioner<K,V> | A Partitioner that partitions sortable records by range into roughly equal ranges. | 
| RequestMethod | |
| Resubmitted | :: DeveloperApi :: An org.apache.spark.scheduler.ShuffleMapTask that completed successfully earlier, but whose executor was lost before the stage completed. | 
| SerializableWritable<T extends org.apache.hadoop.io.Writable> | |
| ShuffleDependency<K,V,C> | :: DeveloperApi :: Represents a dependency on the output of a shuffle stage. | 
| ShuffleStatus | Helper class used by the MapOutputTrackerMaster to perform bookkeeping for a single ShuffleMapStage. | 
| SimpleFutureAction<T> | A FutureAction holding the result of an action that triggers a single job. | 
| SparkBuildInfo | |
| SparkConf | Configuration for a Spark application. | 
| SparkContext | Main entry point for Spark functionality. | 
| SparkEnv | :: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, RpcEnv, block manager, map output tracker, etc. | 
| SparkExecutorInfoImpl | |
| SparkFiles | Resolves paths to files added through SparkContext.addFile(). | 
| SparkFirehoseListener | Class that allows users to receive all SparkListener events. | 
| SparkJobInfoImpl | |
| SparkMasterRegex | A collection of regexes for extracting information from the master string. | 
| SparkStageInfoImpl | |
| SparkStatusTracker | Low-level status reporting APIs for monitoring job and stage progress. | 
| SparkThrowableHelper | Companion object used by instances of SparkThrowable to access error class information and construct error messages. | 
| SpillListener | A SparkListener that detects whether spills have occurred in Spark jobs. | 
| StopMapOutputTracker | |
| Success | :: DeveloperApi :: Task succeeded. | 
| TaskCommitDenied | :: DeveloperApi :: Task requested the driver to commit, but was denied. | 
| TaskContext | Contextual information about a task which can be read or mutated during execution. | 
| TaskKilled | :: DeveloperApi :: Task was killed intentionally and needs to be rescheduled. | 
| TaskResultLost | :: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched. | 
| TaskSchedulerIsSet | An event that SparkContext uses to notify HeartbeatReceiver that SparkContext.taskScheduler is created. | 
| TaskState | |
| TestUtils | Utilities for tests. | 
| UnknownReason | :: DeveloperApi :: We don't know why the task ended -- for example, because of a ClassNotFound exception when deserializing the task result. | 
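As a concrete use of Partitioner and HashPartitioner, the sketch below (assuming local mode; PartitionerDemo is an illustrative name) redistributes a pair RDD so that equal keys land in the same partition:

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object PartitionerDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("partitioner-demo").setMaster("local[2]"))
    try {
      val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3), ("a", 4)))
      // Route each record to a partition chosen by the key's Object.hashCode
      // modulo the partition count, so equal keys are co-located.
      val partitioned = pairs.partitionBy(new HashPartitioner(4))
      println(partitioned.partitioner)      // Some(HashPartitioner)
      println(partitioned.getNumPartitions) // 4
    } finally {
      sc.stop()
    }
  }
}
```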
| Enum | Description | 
|---|---|
| JobExecutionStatus | Status of a Spark job: RUNNING, SUCCEEDED, FAILED, or UNKNOWN (see the sketch after this table). | 
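JobExecutionStatus is what SparkStatusTracker reports for a job. A minimal sketch, assuming local mode and jobs run outside any job group (so getJobIdsForGroup(null) finds them):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object StatusTrackerDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("status-tracker-demo").setMaster("local[2]"))
    try {
      sc.parallelize(1 to 100).count() // run one job so there is something to inspect
      val tracker = sc.statusTracker
      for (jobId <- tracker.getJobIdsForGroup(null)) {
        tracker.getJobInfo(jobId).foreach { info =>
          // status() yields a JobExecutionStatus: RUNNING, SUCCEEDED, FAILED, or UNKNOWN.
          println(s"job $jobId: ${info.status()} with ${info.stageIds().length} stage(s)")
        }
      }
    } finally {
      sc.stop()
    }
  }
}
```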
| Exception | Description | 
|---|---|
| SparkException | Generic exception thrown from Spark; implements SparkThrowable (see the sketch after this table). | 
| TaskKilledException | :: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected). | 
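Because SparkException mixes in SparkThrowable, driver code can pull structured error metadata out of a failure. A sketch, assuming a Spark release where SparkThrowable exposes getErrorClass and getSqlState (newer releases deprecate getErrorClass in favor of getCondition, and both may be null for errors without an assigned error class):

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkThrowable}

object SparkThrowableDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("throwable-demo").setMaster("local[2]"))
    try {
      // Fails at i == 5 with an ArithmeticException inside a task.
      sc.parallelize(1 to 10).map(i => 10 / (i - 5)).count()
    } catch {
      // Spark wraps the task failure in a SparkException, which mixes in SparkThrowable.
      case e: SparkThrowable =>
        println(s"error class: ${e.getErrorClass}, SQLSTATE: ${e.getSqlState}")
    } finally {
      sc.stop()
    }
  }
}
```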
A few classes here, such as StorageLevel, are also used in Java, but the org.apache.spark.api.java package contains the main Java API.