package spark

Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions when you import org.apache.spark.SparkContext._.
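
For instance, a minimal sketch of that import at work (the object name, application name, and local[2] master below are illustrative placeholders, assuming the 1.x-era API where the conversions live in the SparkContext companion object):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // brings the implicit RDD conversions into scope

    object PairRddSketch {
      def main(args: Array[String]): Unit = {
        // App name and local two-thread master are placeholders.
        val sc = new SparkContext(
          new SparkConf().setAppName("PairRddSketch").setMaster("local[2]"))

        // An RDD[(Int, Int)]; the import above makes PairRDDFunctions
        // operations such as groupByKey and join available on it.
        val pairs = sc.parallelize(Seq((1, 10), (1, 11), (2, 20)))
        val grouped = pairs.groupByKey()  // values grouped per key
        val joined  = pairs.join(sc.parallelize(Seq((1, "a"), (2, "b"))))

        grouped.collect().foreach(println)
        joined.collect().foreach(println)

        sc.stop()
      }
    }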

Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

Type Members

  1. class Accumulable[R, T] extends Serializable

    A data type that can be accumulated, i.e. it has a commutative and associative "add" operation, but where the result type, R, may be different from the element type being added, T.

  2. trait AccumulableParam[R, T] extends Serializable

    Helper object defining how to accumulate values of a particular type.

  3. class Accumulator[T] extends Accumulable[T, T]

    A simpler value of Accumulable where the result type being accumulated is the same as the type of the elements being merged (a usage sketch follows this list).

  4. trait AccumulatorParam[T] extends AccumulableParam[T, T]

    A simpler version of org.apache.spark.AccumulableParam where the only data type you can add in is the same type as the accumulated value.

  5. case class Aggregator[K, V, C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiners: (C, C) ⇒ C) extends Product with Serializable

    A set of functions used to aggregate data.

  6. class ComplexFutureAction[T] extends FutureAction[T]

    A FutureAction for actions that could trigger multiple Spark jobs.

  7. abstract class Dependency[T] extends Serializable

    Base class for dependencies.

  8. trait FutureAction[T] extends Future[T]

    A future for the result of an action to support cancellation.

  9. class HashPartitioner extends Partitioner

    A org.apache.spark.Partitioner that implements hash-based partitioning using Java's Object.hashCode (a usage sketch follows this list).

  10. class InterruptibleIterator[+T] extends Iterator[T]

    An iterator that wraps around an existing iterator to provide task killing functionality.

  11. trait Logging extends AnyRef

    Utility trait for classes that want to log data.

  12. abstract class NarrowDependency[T] extends Dependency[T]

    Base class for dependencies where each partition of the parent RDD is used by at most one partition of the child RDD.

  13. class OneToOneDependency[T] extends NarrowDependency[T]

    Represents a one-to-one dependency between partitions of the parent and child RDDs.

  14. trait Partition extends Serializable

    A partition of an RDD.

  15. abstract class Partitioner extends Serializable

    An object that defines how the elements in a key-value pair RDD are partitioned by key.

  16. class RangeDependency[T] extends NarrowDependency[T]

    Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.

  17. class RangePartitioner[K, V] extends Partitioner

    A org.apache.spark.Partitioner that partitions sortable records by range into roughly equal ranges.

  18. class SerializableWritable[T <: Writable] extends Serializable

  19. class ShuffleDependency[K, V] extends Dependency[Product2[K, V]]

    Represents a dependency on the output of a shuffle stage.

  20. class SimpleFutureAction[T] extends FutureAction[T]

    A FutureAction holding the result of an action that triggers a single job.

  21. class SparkConf extends Cloneable with Logging

    Configuration for a Spark application.

  22. class SparkContext extends Logging

    Main entry point for Spark functionality.

  23. class SparkEnv extends Logging

    Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, Akka actor system, block manager, map output tracker, etc.

  24. class SparkException extends Exception

  25. class SparkFiles extends AnyRef

  26. class TaskContext extends Serializable
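
To make the Accumulator entries above concrete, here is a minimal sketch (object name, app name, and local master are placeholders; on the releases this page documents, the implicit AccumulatorParam[Int] comes from the SparkContext._ import):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // implicit AccumulatorParam instances

    object AccumulatorSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("AccumulatorSketch").setMaster("local[2]"))

        // Accumulator[Int]: tasks may only add to it with +=;
        // only the driver can read its value.
        val sum = sc.accumulator(0)
        sc.parallelize(1 to 100).foreach(x => sum += x)
        println(sum.value)  // 5050, read back on the driver

        sc.stop()
      }
    }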
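
And a sketch of explicit partitioning with HashPartitioner (same placeholder configuration; partitionBy is one of the PairRDDFunctions operations brought in by the import):

    import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // for PairRDDFunctions.partitionBy

    object PartitionerSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("PartitionerSketch").setMaster("local[2]"))

        val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

        // Hash the keys into 4 partitions; identical keys always land
        // in the same partition, which groupByKey and join can exploit.
        val partitioned = pairs.partitionBy(new HashPartitioner(4))
        println(partitioned.partitioner)        // Some(...) - the partitioner is now known
        println(partitioned.partitions.length)  // 4

        sc.stop()
      }
    }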

Value Members

  1. object Partitioner extends Serializable

  2. object SparkContext

    The SparkContext object contains a number of implicit conversions and parameters for use with various Spark features.

  3. object SparkEnv extends Logging

  4. package api

  5. package broadcast

    Package for broadcast variables.

  6. package executor

  7. package io

  8. package metrics

  9. package partial

  10. package rdd

  11. package scheduler

  12. package serializer

  13. package storage

  14. package util
