spark

package spark

Core Spark functionality. SparkContext serves as the main entry point to Spark, while RDD is the data type representing a distributed collection and provides most of the parallel operations.

In addition, PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; DoubleRDDFunctions contains operations available only on RDDs of Doubles; and SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions when you import spark.SparkContext._.
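
For example, the following minimal sketch (the "local" master string and application name are placeholders to adjust for your setup) creates a SparkContext, builds an RDD of pairs, and calls groupByKey, which only becomes available on an RDD[(Int, String)] through those implicit conversions:

    import spark.SparkContext
    import spark.SparkContext._   // brings the implicit RDD conversions into scope

    object GroupByKeyExample {
      def main(args: Array[String]) {
        val sc = new SparkContext("local", "GroupByKeyExample")

        // An RDD[(Int, String)]: a distributed collection of key-value pairs.
        val pairs = sc.parallelize(Seq((1, "a"), (1, "b"), (2, "c")))

        // groupByKey comes from PairRDDFunctions, made available on this RDD
        // by the implicit conversions imported above.
        pairs.groupByKey().collect().foreach(println)

        sc.stop()
      }
    }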

Type Members

  1. class Accumulable[R, T] extends Serializable

    A datatype that can be accumulated, i.e. has a commutative and associative "add" operation, but where the result type, R, may be different from the element type being added, T.

  2. trait AccumulableParam[R, T] extends Serializable

    Helper object defining how to accumulate values of a particular type.

  3. class Accumulator[T] extends Accumulable[T, T]

    A simpler form of Accumulable where the result type being accumulated is the same as the type of the elements being merged (see the accumulator sketch after this list).

  4. trait AccumulatorParam[T] extends AccumulableParam[T, T]

    A simpler version of AccumulableParam where the only datatype you can add in is the same type as the accumulated value.

  5. case class Aggregator[K, V, C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiners: (C, C) ⇒ C) extends Product with Serializable

    A set of functions used to aggregate data.

  6. abstract class Dependency[T] extends Serializable

    Base class for dependencies.

  7. class DoubleRDDFunctions extends Logging with Serializable

    Extra functions available, through an implicit conversion, on RDDs of Doubles (see the sketch after this list).

  8. class HashPartitioner extends Partitioner

    A Partitioner that implements hash-based partitioning using Java's Object.hashCode (see the partitioning sketch after this list).

  9. class JavaSerializer extends Serializer

    A Spark serializer that uses Java's built-in serialization.

  10. trait KryoRegistrator extends AnyRef

    Interface implemented by clients to register their classes with Kryo when using Kryo serialization (see the Kryo sketch after this list).

  11. class KryoSerializer extends Serializer with Logging

    A Spark serializer that uses the Kryo 1.x library.

  12. trait Logging extends AnyRef

    Utility trait for classes that want to log data.

  13. abstract class NarrowDependency[T] extends Dependency[T]

    Base class for dependencies where each partition of the parent RDD is used by at most one partition of the child RDD.

  14. class OneToOneDependency[T] extends NarrowDependency[T]

    Represents a one-to-one dependency between partitions of the parent and child RDDs.

  15. class OrderedRDDFunctions[K, V] extends Logging with Serializable

    Extra functions available, through an implicit conversion, on RDDs of (key, value) pairs where the key is sortable (see the sketch after this list).

  16. class PairRDDFunctions[K, V] extends Logging with HadoopMapReduceUtil with Serializable

    Extra functions available on RDDs of (key, value) pairs through an implicit conversion.

  17. trait Partition extends Serializable

    A partition of an RDD.

  18. abstract class Partitioner extends Serializable

    An object that defines how the elements in a key-value pair RDD are partitioned by key.

  19. abstract class RDD[T] extends Serializable with Logging

    A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.

  20. class RangeDependency[T] extends NarrowDependency[T]

    Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.

  21. class RangePartitioner[K, V] extends Partitioner

    A Partitioner that partitions sortable records by range into roughly equal ranges.

  22. class SequenceFileRDDFunctions[K, V] extends Logging with Serializable

    Extra functions available on RDDs of (key, value) pairs to create a Hadoop SequenceFile, through an implicit conversion.

  23. class SerializableWritable[T <: Writable] extends Serializable

  24. class ShuffleDependency[K, V] extends Dependency[(K, V)]

    Represents a dependency on the output of a shuffle stage.

  25. class SparkContext extends Logging

    Main entry point for Spark functionality.

  26. class SparkEnv extends AnyRef

    Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, Akka actor system, block manager, map output tracker, etc.

  27. class SparkException extends Exception

  28. class SparkFiles extends AnyRef

  29. class TaskContext extends Serializable
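
The accumulator types above (items 1-4) are created through SparkContext. A minimal sketch, assuming a local master:

    import spark.SparkContext
    import spark.SparkContext._   // provides the implicit AccumulatorParam for Int

    val sc = new SparkContext("local", "AccumulatorExample")
    val sum = sc.accumulator(0)   // an Accumulator[Int]

    // Tasks add to the accumulator with +=; only the driver reads .value.
    sc.parallelize(1 to 10).foreach(x => sum += x)
    println(sum.value)            // 55

    sc.stop()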
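
The same implicit-conversion pattern makes DoubleRDDFunctions (item 7) and OrderedRDDFunctions (item 15) available. A sketch, again assuming a local SparkContext:

    import spark.SparkContext
    import spark.SparkContext._

    val sc = new SparkContext("local", "ImplicitsExample")

    // DoubleRDDFunctions: statistical helpers on RDD[Double].
    println(sc.parallelize(Seq(1.0, 2.0, 3.0)).mean())   // 2.0

    // OrderedRDDFunctions: sortByKey on pair RDDs with sortable keys.
    val sorted = sc.parallelize(Seq((3, "c"), (1, "a"))).sortByKey()
    sorted.collect().foreach(println)                    // (1,a) then (3,c)

    sc.stop()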
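
HashPartitioner and RangePartitioner (items 8 and 21) are typically passed to pair-RDD operations such as partitionBy from PairRDDFunctions. A partitioning sketch:

    import spark.{HashPartitioner, SparkContext}
    import spark.SparkContext._

    val sc = new SparkContext("local", "PartitionerExample")
    val pairs = sc.parallelize(Seq((1, "a"), (2, "b"), (3, "c")))

    // Spread the pairs across 4 partitions by the keys' hashCode.
    val partitioned = pairs.partitionBy(new HashPartitioner(4))
    println(partitioned.partitioner)   // Some(spark.HashPartitioner@...)

    sc.stop()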
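
To use KryoSerializer (items 10 and 11), select it before creating the SparkContext and, optionally, supply a KryoRegistrator that registers your classes. A Kryo sketch, in which MyClass and MyRegistrator are hypothetical names:

    import com.esotericsoftware.kryo.Kryo
    import spark.KryoRegistrator

    class MyClass(val x: Int)   // a stand-in for one of your own classes

    class MyRegistrator extends KryoRegistrator {
      // Called by Spark so that Kryo knows about your classes up front.
      override def registerClasses(kryo: Kryo) {
        kryo.register(classOf[MyClass])
      }
    }

    // System properties read when the SparkContext starts; the registrator
    // name must be the fully qualified class name in a real package.
    System.setProperty("spark.serializer", "spark.KryoSerializer")
    System.setProperty("spark.kryo.registrator", "MyRegistrator")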

Value Members

  1. object Partitioner extends Serializable

  2. object SparkContext extends AnyRef

    The SparkContext object contains a number of implicit conversions and parameters for use with various Spark features.

  3. object SparkEnv extends Logging

  4. object SparkFiles extends AnyRef

  5. package api

  6. package broadcast

  7. package common

  8. package deploy

  9. package partial

  10. package rdd

  11. package serializer

  12. package storage

  13. package util