org.apache.spark.api.java

JavaPairRDD

class JavaPairRDD[K, V] extends AbstractJavaRDDLike[(K, V), JavaPairRDD[K, V]]

Source
JavaPairRDD.scala
Linear Supertypes
AbstractJavaRDDLike[(K, V), JavaPairRDD[K, V]], JavaRDDLike[(K, V), JavaPairRDD[K, V]], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new JavaPairRDD(rdd: RDD[(K, V)])(implicit kClassTag: ClassTag[K], vClassTag: ClassTag[V])

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def aggregate[U](zeroValue: U)(seqOp: Function2[U, (K, V), U], combOp: Function2[U, U, U]): U

    Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into an U and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.

    Definition Classes
    JavaRDDLike
  5. def aggregateByKey[U](zeroValue: U, seqFunc: Function2[U, V, U], combFunc: Function2[U, U, U]): JavaPairRDD[K, U]

    Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.

  6. def aggregateByKey[U](zeroValue: U, numPartitions: Int, seqFunc: Function2[U, V, U], combFunc: Function2[U, U, U]): JavaPairRDD[K, U]

    Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's, as in scala.TraversableOnce. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.

  7. def aggregateByKey[U](zeroValue: U, partitioner: Partitioner, seqFunc: Function2[U, V, U], combFunc: Function2[U, U, U]): JavaPairRDD[K, U]

    Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's, as in scala.TraversableOnce. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.
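
    For illustration, a minimal Java sketch (hypothetical names; assumes an existing JavaSparkContext sc) that builds a per-key (sum, count) pair, from which a per-key average could then be derived:

      // Assumes: import java.util.Arrays; import scala.Tuple2;
      JavaPairRDD<String, Integer> scores = sc.parallelizePairs(Arrays.asList(
          new Tuple2<>("a", 1), new Tuple2<>("a", 3), new Tuple2<>("b", 5)));

      // The zero value is a (sum, count) pair; seqFunc folds one value into it,
      // and combFunc merges two partial (sum, count) pairs across partitions.
      JavaPairRDD<String, Tuple2<Integer, Integer>> sumCounts = scores.aggregateByKey(
          new Tuple2<>(0, 0),
          (acc, v) -> new Tuple2<>(acc._1() + v, acc._2() + 1),
          (p1, p2) -> new Tuple2<>(p1._1() + p2._1(), p1._2() + p2._2()));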

  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. def cache(): JavaPairRDD[K, V]

    Persist this RDD with the default storage level (MEMORY_ONLY).

  10. def cartesian[U](other: JavaRDDLike[U, _]): JavaPairRDD[(K, V), U]

    Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.

    Definition Classes
    JavaRDDLike
  11. def checkpoint(): Unit

    Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext.setCheckpointDir() and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.

    Definition Classes
    JavaRDDLike
  12. val classTag: ClassTag[(K, V)]

    Definition Classes
    JavaPairRDD → JavaRDDLike
  13. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. def coalesce(numPartitions: Int, shuffle: Boolean): JavaPairRDD[K, V]

    Return a new RDD that is reduced into numPartitions partitions.

  15. def coalesce(numPartitions: Int): JavaPairRDD[K, V]

    Return a new RDD that is reduced into numPartitions partitions.

  16. def cogroup[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3], numPartitions: Int): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3])]

    For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.

  17. def cogroup[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], numPartitions: Int): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2])]

    For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.

  18. def cogroup[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (Iterable[V], Iterable[W])]

    For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.

  19. def cogroup[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3]): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3])]

    For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.

  20. def cogroup[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2]): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2])]

    For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.

  21. def cogroup[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (Iterable[V], Iterable[W])]

    For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.

  22. def cogroup[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3], partitioner: Partitioner): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3])]

    For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.

  23. def cogroup[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], partitioner: Partitioner): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2])]

    For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.

  24. def cogroup[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (Iterable[V], Iterable[W])]

    For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
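
    A minimal sketch (hypothetical RDDs a and b sharing the key type String): each output value carries one Iterable per input RDD, empty when the key is absent from that input.

      // Assumes: JavaPairRDD<String, Integer> a; JavaPairRDD<String, String> b;
      JavaPairRDD<String, Tuple2<Iterable<Integer>, Iterable<String>>> grouped =
          a.cogroup(b);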

  25. def collect(): List[(K, V)]

    Return an array that contains all of the elements in this RDD.

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  26. def collectAsMap(): Map[K, V]

    Return the key-value pairs in this RDD to the master as a Map.

    Note

    this method should only be used if the resulting data is expected to be small, as all the data is loaded into the driver's memory.

  27. def collectAsync(): JavaFutureAction[List[(K, V)]]

    The asynchronous version of collect, which returns a future for retrieving an array containing all of the elements in this RDD.

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  28. def collectPartitions(partitionIds: Array[Int]): Array[List[(K, V)]]

    Return an array that contains all of the elements in a specific partition of this RDD.

    Definition Classes
    JavaRDDLike
  29. def combineByKey[C](createCombiner: Function[V, C], mergeValue: Function2[C, V, C], mergeCombiners: Function2[C, C, C]): JavaPairRDD[K, C]

    Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level and using map-side aggregation.

  30. def combineByKey[C](createCombiner: Function[V, C], mergeValue: Function2[C, V, C], mergeCombiners: Function2[C, C, C], numPartitions: Int): JavaPairRDD[K, C]

    Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.

  31. def combineByKey[C](createCombiner: Function[V, C], mergeValue: Function2[C, V, C], mergeCombiners: Function2[C, C, C], partitioner: Partitioner): JavaPairRDD[K, C]

    Generic function to combine the elements for each key using a custom set of aggregation functions. Turns a JavaPairRDD[(K, V)] into a result of type JavaPairRDD[(K, C)], for a "combined type" C.

    Users provide three functions:

    • createCombiner, which turns a V into a C (e.g., creates a one-element list)
    • mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
    • mergeCombiners, to combine two C's into a single one.

    In addition, users can control the partitioning of the output RDD. This method automatically uses map-side aggregation in shuffling the RDD.

    Note

    V and C can be different -- for example, one might group an RDD of type (Int, Int) into an RDD of type (Int, List[Int]).
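
    A minimal Java sketch of that grouping pattern (hypothetical scores RDD of type JavaPairRDD<String, Integer>), collecting the values for each key into a list:

      // Assumes: import java.util.ArrayList; import java.util.Arrays; import java.util.List;
      JavaPairRDD<String, List<Integer>> byKey = scores.<List<Integer>>combineByKey(
          v -> new ArrayList<>(Arrays.asList(v)),      // createCombiner: V -> C
          (list, v) -> { list.add(v); return list; },  // mergeValue: fold a V into a C
          (l1, l2) -> { l1.addAll(l2); return l1; });  // mergeCombiners: merge two C's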

  32. def combineByKey[C](createCombiner: Function[V, C], mergeValue: Function2[C, V, C], mergeCombiners: Function2[C, C, C], partitioner: Partitioner, mapSideCombine: Boolean, serializer: Serializer): JavaPairRDD[K, C]

    Generic function to combine the elements for each key using a custom set of aggregation functions. Turns a JavaPairRDD[(K, V)] into a result of type JavaPairRDD[(K, C)], for a "combined type" C.

    Users provide three functions:

    • createCombiner, which turns a V into a C (e.g., creates a one-element list)
    • mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
    • mergeCombiners, to combine two C's into a single one.

    In addition, users can control the partitioning of the output RDD, the serializer that is used for the shuffle, and whether to perform map-side aggregation (if a mapper can produce multiple items with the same key).

    Note

    V and C can be different -- for example, one might group an RDD of type (Int, Int) into an RDD of type (Int, List[Int]).

  33. def context: SparkContext

    The org.apache.spark.SparkContext that this RDD was created on.

    Definition Classes
    JavaRDDLike
  34. def count(): Long

    Return the number of elements in the RDD.

    Definition Classes
    JavaRDDLike
  35. def countApprox(timeout: Long): PartialResult[BoundedDouble]

    Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.

    timeout

    maximum time to wait for the job, in milliseconds

    Definition Classes
    JavaRDDLike
  36. def countApprox(timeout: Long, confidence: Double): PartialResult[BoundedDouble]

    Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.

    The confidence is the probability that the error bounds of the result will contain the true value. That is, if countApprox were called repeatedly with confidence 0.9, we would expect 90% of the results to contain the true count. The confidence must be in the range [0,1] or an exception will be thrown.

    timeout

    maximum time to wait for the job, in milliseconds

    confidence

    the desired statistical confidence in the result

    returns

    a potentially incomplete result, with error bounds

    Definition Classes
    JavaRDDLike
  37. def countApproxDistinct(relativeSD: Double): Long

    Return approximate number of distinct elements in the RDD.

    The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

    relativeSD

    Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.

    Definition Classes
    JavaRDDLike
  38. def countApproxDistinctByKey(relativeSD: Double): JavaPairRDD[K, Long]

    Return approximate number of distinct values for each key in this RDD.

    The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

    relativeSD

    Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.

  39. def countApproxDistinctByKey(relativeSD: Double, numPartitions: Int): JavaPairRDD[K, Long]

    Return approximate number of distinct values for each key in this RDD.

    The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

    relativeSD

    Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.

    numPartitions

    number of partitions of the resulting RDD.

  40. def countApproxDistinctByKey(relativeSD: Double, partitioner: Partitioner): JavaPairRDD[K, Long]

    Return approximate number of distinct values for each key in this RDD.

    The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

    relativeSD

    Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.

    partitioner

    partitioner of the resulting RDD.

  41. def countAsync(): JavaFutureAction[Long]

    The asynchronous version of count, which returns a future for counting the number of elements in this RDD.

    Definition Classes
    JavaRDDLike
  42. def countByKey(): Map[K, Long]

    Count the number of elements for each key, and return the result to the master as a Map.
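
    A minimal sketch (hypothetical pairs RDD); since the whole result is materialized on the driver, this is only appropriate when the number of distinct keys is small:

      Map<String, Long> counts = pairs.countByKey();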

  43. def countByKeyApprox(timeout: Long, confidence: Double = 0.95): PartialResult[Map[K, BoundedDouble]]

    Approximate version of countByKey that can return a partial result if it does not finish within a timeout.

  44. def countByKeyApprox(timeout: Long): PartialResult[Map[K, BoundedDouble]]

    Approximate version of countByKey that can return a partial result if it does not finish within a timeout.

  45. def countByValue(): Map[(K, V), Long]

    Return the count of each unique value in this RDD as a map of (value, count) pairs. The final combine step happens locally on the master, equivalent to running a single reduce task.

    Definition Classes
    JavaRDDLike
  46. def countByValueApprox(timeout: Long): PartialResult[Map[(K, V), BoundedDouble]]

    Approximate version of countByValue().

    timeout

    maximum time to wait for the job, in milliseconds

    returns

    a potentially incomplete result, with error bounds

    Definition Classes
    JavaRDDLike
  47. def countByValueApprox(timeout: Long, confidence: Double): PartialResult[Map[(K, V), BoundedDouble]]

    Approximate version of countByValue().

    The confidence is the probability that the error bounds of the result will contain the true value. That is, if countApprox were called repeatedly with confidence 0.9, we would expect 90% of the results to contain the true count. The confidence must be in the range [0,1] or an exception will be thrown.

    timeout

    maximum time to wait for the job, in milliseconds

    confidence

    the desired statistical confidence in the result

    returns

    a potentially incomplete result, with error bounds

    Definition Classes
    JavaRDDLike
  48. def distinct(numPartitions: Int): JavaPairRDD[K, V]

    Return a new RDD containing the distinct elements in this RDD.

  49. def distinct(): JavaPairRDD[K, V]

    Return a new RDD containing the distinct elements in this RDD.

  50. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  51. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  52. def filter(f: Function[(K, V), Boolean]): JavaPairRDD[K, V]

    Return a new RDD containing only the elements that satisfy a predicate.

  53. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  54. def first(): (K, V)

    Return the first element in this RDD.

    Definition Classes
    JavaPairRDD → JavaRDDLike
  55. def flatMap[U](f: FlatMapFunction[(K, V), U]): JavaRDD[U]

    Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.

    Definition Classes
    JavaRDDLike
  56. def flatMapToDouble(f: DoubleFlatMapFunction[(K, V)]): JavaDoubleRDD

    Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.

    Definition Classes
    JavaRDDLike
  57. def flatMapToPair[K2, V2](f: PairFlatMapFunction[(K, V), K2, V2]): JavaPairRDD[K2, V2]

    Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.

    Definition Classes
    JavaRDDLike
  58. def flatMapValues[U](f: Function[V, Iterable[U]]): JavaPairRDD[K, U]

    Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
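
    A minimal sketch (assumes an existing JavaSparkContext sc): each value expands into several pairs, all keeping the original key and partitioning.

      // Assumes: import java.util.Arrays; import scala.Tuple2;
      JavaPairRDD<String, String> csv = sc.parallelizePairs(Arrays.asList(
          new Tuple2<>("k", "a,b,c")));
      JavaPairRDD<String, String> exploded =
          csv.flatMapValues(v -> Arrays.asList(v.split(",")));  // (k,a), (k,b), (k,c)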

  59. def fold(zeroValue: (K, V))(f: Function2[(K, V), (K, V), (K, V)]): (K, V)

    Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value". The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.

    This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.

    Definition Classes
    JavaRDDLike
  60. def foldByKey(zeroValue: V, func: Function2[V, V, V]): JavaPairRDD[K, V]

    Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).

  61. def foldByKey(zeroValue: V, numPartitions: Int, func: Function2[V, V, V]): JavaPairRDD[K, V]

    Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).

  62. def foldByKey(zeroValue: V, partitioner: Partitioner, func: Function2[V, V, V]): JavaPairRDD[K, V]

    Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
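
    A minimal sketch using the two-argument overload above (hypothetical scores RDD of type JavaPairRDD<String, Integer>): per-key sums, with 0 as the neutral zero value.

      JavaPairRDD<String, Integer> sums = scores.foldByKey(0, (a, b) -> a + b);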

  63. def foreach(f: VoidFunction[(K, V)]): Unit

    Applies a function f to all elements of this RDD.

    Definition Classes
    JavaRDDLike
  64. def foreachAsync(f: VoidFunction[(K, V)]): JavaFutureAction[Void]

    The asynchronous version of the foreach action, which applies a function f to all the elements of this RDD.

    Definition Classes
    JavaRDDLike
  65. def foreachPartition(f: VoidFunction[Iterator[(K, V)]]): Unit

    Applies a function f to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  66. def foreachPartitionAsync(f: VoidFunction[Iterator[(K, V)]]): JavaFutureAction[Void]

    The asynchronous version of the foreachPartition action, which applies a function f to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  67. def fullOuterJoin[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (Optional[V], Optional[W])]

    Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.

  68. def fullOuterJoin[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (Optional[V], Optional[W])]

    Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD using the existing partitioner/parallelism level.

  69. def fullOuterJoin[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (Optional[V], Optional[W])]

    Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.

  70. def getCheckpointFile(): Optional[String]

    Gets the name of the file to which this RDD was checkpointed.

    Definition Classes
    JavaRDDLike
  71. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  72. def getNumPartitions: Int

    Return the number of partitions in this RDD.

    Definition Classes
    JavaRDDLike
    Annotations
    @Since( "1.6.0" )
  73. def getStorageLevel: StorageLevel

    Get the RDD's current storage level, or StorageLevel.NONE if none is set.

    Definition Classes
    JavaRDDLike
  74. def glom(): JavaRDD[List[(K, V)]]

    Return an RDD created by coalescing all elements within each partition into an array.

    Definition Classes
    JavaRDDLike
  75. def groupBy[U](f: Function[(K, V), U], numPartitions: Int): JavaPairRDD[U, Iterable[(K, V)]]

    Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key.

    Definition Classes
    JavaRDDLike
  76. def groupBy[U](f: Function[(K, V), U]): JavaPairRDD[U, Iterable[(K, V)]]

    Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key.

    Definition Classes
    JavaRDDLike
  77. def groupByKey(): JavaPairRDD[K, Iterable[V]]

    Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with the existing partitioner/parallelism level.

    Note

    If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using JavaPairRDD.reduceByKey or JavaPairRDD.combineByKey will provide much better performance.

  78. def groupByKey(numPartitions: Int): JavaPairRDD[K, Iterable[V]]

    Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD into numPartitions partitions.

    Note

    If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using JavaPairRDD.reduceByKey or JavaPairRDD.combineByKey will provide much better performance.

  79. def groupByKey(partitioner: Partitioner): JavaPairRDD[K, Iterable[V]]

    Group the values for each key in the RDD into a single sequence. Allows controlling the partitioning of the resulting key-value pair RDD by passing a Partitioner.

    Note

    If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using JavaPairRDD.reduceByKey or JavaPairRDD.combineByKey will provide much better performance.
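
    A minimal sketch contrasting groupByKey with the reduceByKey alternative recommended in the notes above (hypothetical scores RDD of type JavaPairRDD<String, Integer>):

      JavaPairRDD<String, Iterable<Integer>> groups = scores.groupByKey();
      // Preferable for a plain aggregation, since it combines map-side:
      JavaPairRDD<String, Integer> sums = scores.reduceByKey((a, b) -> a + b);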

  80. def groupWith[W1, W2, W3](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2], other3: JavaPairRDD[K, W3]): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3])]

    Alias for cogroup.

  81. def groupWith[W1, W2](other1: JavaPairRDD[K, W1], other2: JavaPairRDD[K, W2]): JavaPairRDD[K, (Iterable[V], Iterable[W1], Iterable[W2])]

    Alias for cogroup.

  82. def groupWith[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (Iterable[V], Iterable[W])]

    Alias for cogroup.

  83. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  84. def id: Int

    A unique ID for this RDD (within its SparkContext).

    Definition Classes
    JavaRDDLike
  85. def intersection(other: JavaPairRDD[K, V]): JavaPairRDD[K, V]

    Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

    Note

    This method performs a shuffle internally.

  86. def isCheckpointed: Boolean

    Return whether this RDD has been checkpointed or not.

    Definition Classes
    JavaRDDLike
  87. def isEmpty(): Boolean

    returns

    true if and only if the RDD contains no elements at all. Note that an RDD may be empty even when it has at least 1 partition.

    Definition Classes
    JavaRDDLike
  88. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  89. def iterator(split: Partition, taskContext: TaskContext): Iterator[(K, V)]

    Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should not be called by users directly, but is available for implementors of custom subclasses of RDD.

    Definition Classes
    JavaRDDLike
  90. def join[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (V, W)]

    Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.

  91. def join[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (V, W)]

    Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.

  92. def join[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (V, W)]

    Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Uses the given Partitioner to partition the output RDD.
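
    A minimal sketch of an inner join and a left outer join (assumes an existing JavaSparkContext sc; Optional is the type named in the signatures above):

      // Assumes: import java.util.Arrays; import scala.Tuple2;
      JavaPairRDD<String, Integer> ages = sc.parallelizePairs(Arrays.asList(
          new Tuple2<>("alice", 31), new Tuple2<>("bob", 25)));
      JavaPairRDD<String, String> cities = sc.parallelizePairs(Arrays.asList(
          new Tuple2<>("alice", "Paris")));

      // Inner join keeps only keys present in both RDDs: ("alice", (31, "Paris")).
      JavaPairRDD<String, Tuple2<Integer, String>> joined = ages.join(cities);

      // Left outer join keeps every key of `ages`; a missing match yields an
      // absent Optional: ("bob", (25, <absent>)).
      JavaPairRDD<String, Tuple2<Integer, Optional<String>>> withCity =
          ages.leftOuterJoin(cities);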

  93. implicit val kClassTag: ClassTag[K]

  94. def keyBy[U](f: Function[(K, V), U]): JavaPairRDD[U, (K, V)]

    Creates tuples of the elements in this RDD by applying f.

    Definition Classes
    JavaRDDLike
  95. def keys(): JavaRDD[K]

    Return an RDD with the keys of each tuple.

  96. def leftOuterJoin[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (V, Optional[W])]

    Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output into numPartitions partitions.

  97. def leftOuterJoin[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (V, Optional[W])]

    Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output using the existing partitioner/parallelism level.

  98. def leftOuterJoin[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (V, Optional[W])]

    Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Uses the given Partitioner to partition the output RDD.

  99. def lookup(key: K): List[V]

    Return the list of values in the RDD for key key. This operation is done efficiently if the RDD has a known partitioner by only searching the partition that the key maps to.
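
    A minimal sketch (hypothetical scores RDD of type JavaPairRDD<String, Integer>):

      List<Integer> valuesForA = scores.lookup("a");  // all values stored under key "a"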

  100. def map[R](f: Function[(K, V), R]): JavaRDD[R]

    Return a new RDD by applying a function to all elements of this RDD.

    Definition Classes
    JavaRDDLike
  101. def mapPartitions[U](f: FlatMapFunction[Iterator[(K, V)], U], preservesPartitioning: Boolean): JavaRDD[U]

    Return a new RDD by applying a function to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  102. def mapPartitions[U](f: FlatMapFunction[Iterator[(K, V)], U]): JavaRDD[U]

    Return a new RDD by applying a function to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  103. def mapPartitionsToDouble(f: DoubleFlatMapFunction[Iterator[(K, V)]], preservesPartitioning: Boolean): JavaDoubleRDD

    Return a new RDD by applying a function to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  104. def mapPartitionsToDouble(f: DoubleFlatMapFunction[Iterator[(K, V)]]): JavaDoubleRDD

    Return a new RDD by applying a function to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  105. def mapPartitionsToPair[K2, V2](f: PairFlatMapFunction[Iterator[(K, V)], K2, V2], preservesPartitioning: Boolean): JavaPairRDD[K2, V2]

    Return a new RDD by applying a function to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  106. def mapPartitionsToPair[K2, V2](f: PairFlatMapFunction[Iterator[(K, V)], K2, V2]): JavaPairRDD[K2, V2]

    Return a new RDD by applying a function to each partition of this RDD.

    Definition Classes
    JavaRDDLike
  107. def mapPartitionsWithIndex[R](f: Function2[Integer, Iterator[(K, V)], Iterator[R]], preservesPartitioning: Boolean = false): JavaRDD[R]

    Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

    Definition Classes
    JavaRDDLike
  108. def mapToDouble[R](f: DoubleFunction[(K, V)]): JavaDoubleRDD

    Return a new RDD by applying a function to all elements of this RDD.

    Definition Classes
    JavaRDDLike
  109. def mapToPair[K2, V2](f: PairFunction[(K, V), K2, V2]): JavaPairRDD[K2, V2]

    Return a new RDD by applying a function to all elements of this RDD.

    Definition Classes
    JavaRDDLike
  110. def mapValues[U](f: Function[V, U]): JavaPairRDD[K, U]

    Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
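
    A minimal sketch (hypothetical scores RDD of type JavaPairRDD<String, Integer>): values change, keys and partitioning do not.

      JavaPairRDD<String, Integer> doubled = scores.mapValues(v -> v * 2);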

  111. def max(comp: Comparator[(K, V)]): (K, V)

    Returns the maximum element from this RDD as defined by the specified Comparator[T].

    comp

    the comparator that defines ordering

    returns

    the maximum of the RDD

    Definition Classes
    JavaRDDLike
  112. def min(comp: Comparator[(K, V)]): (K, V)

    Returns the minimum element from this RDD as defined by the specified Comparator[T].

    comp

    the comparator that defines ordering

    returns

    the minimum of the RDD

    Definition Classes
    JavaRDDLike
  113. def name(): String

    Definition Classes
    JavaRDDLike
  114. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  115. final def notify(): Unit

    Definition Classes
    AnyRef
  116. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  117. def partitionBy(partitioner: Partitioner): JavaPairRDD[K, V]

    Return a copy of the RDD partitioned using the specified partitioner.
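
    A minimal sketch (hypothetical scores RDD): hash-partitioning into 8 partitions, e.g. ahead of repeated joins on the same key.

      // Assumes: import org.apache.spark.HashPartitioner;
      JavaPairRDD<String, Integer> parted = scores.partitionBy(new HashPartitioner(8));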

  118. def partitioner: Optional[Partitioner]

    The partitioner of this RDD.

    Definition Classes
    JavaRDDLike
  119. def partitions: List[Partition]

    Set of partitions in this RDD.

    Definition Classes
    JavaRDDLike
  120. def persist(newLevel: StorageLevel): JavaPairRDD[K, V]

    Set this RDD's storage level to persist its values across operations after the first time it is computed. Can only be called once on each RDD.

  121. def pipe(command: List[String], env: Map[String, String], separateWorkingDir: Boolean, bufferSize: Int, encoding: String): JavaRDD[String]

    Return an RDD created by piping elements to a forked external process.

    Definition Classes
    JavaRDDLike
  122. def pipe(command: List[String], env: Map[String, String], separateWorkingDir: Boolean, bufferSize: Int): JavaRDD[String]

    Return an RDD created by piping elements to a forked external process.

    Definition Classes
    JavaRDDLike
  123. def pipe(command: List[String], env: Map[String, String]): JavaRDD[String]

    Return an RDD created by piping elements to a forked external process.

    Definition Classes
    JavaRDDLike
  124. def pipe(command: List[String]): JavaRDD[String]

    Return an RDD created by piping elements to a forked external process.

    Definition Classes
    JavaRDDLike
  125. def pipe(command: String): JavaRDD[String]

    Return an RDD created by piping elements to a forked external process.

    Definition Classes
    JavaRDDLike
  126. val rdd: RDD[(K, V)]

    Definition Classes
    JavaPairRDD → JavaRDDLike
  127. def reduce(f: Function2[(K, V), (K, V), (K, V)]): (K, V)

    Reduces the elements of this RDD using the specified commutative and associative binary operator.

    Definition Classes
    JavaRDDLike
  128. def reduceByKey(func: Function2[V, V, V]): JavaPairRDD[K, V]

    Merge the values for each key using an associative and commutative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/parallelism level.

  129. def reduceByKey(func: Function2[V, V, V], numPartitions: Int): JavaPairRDD[K, V]

    Merge the values for each key using an associative and commutative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with numPartitions partitions.

  130. def reduceByKey(partitioner: Partitioner, func: Function2[V, V, V]): JavaPairRDD[K, V]

    Merge the values for each key using an associative and commutative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
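
    The classic word-count sketch using the two-argument overload above (assumes an existing JavaSparkContext sc):

      // Assumes: import java.util.Arrays; import scala.Tuple2;
      JavaPairRDD<String, Integer> wordCounts = sc
          .parallelize(Arrays.asList("a", "b", "a"))
          .mapToPair(w -> new Tuple2<>(w, 1))
          .reduceByKey((x, y) -> x + y);  // ("a", 2), ("b", 1)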

  131. def reduceByKeyLocally(func: Function2[V, V, V]): Map[K, V]

    Merge the values for each key using an associative and commutative reduce function, but return the result immediately to the master as a Map. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.

  132. def repartition(numPartitions: Int): JavaPairRDD[K, V]

    Return a new RDD that has exactly numPartitions partitions.

    Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data.

    If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.

  133. def repartitionAndSortWithinPartitions(partitioner: Partitioner, comp: Comparator[K]): JavaPairRDD[K, V]

    Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

    This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

  134. def repartitionAndSortWithinPartitions(partitioner: Partitioner): JavaPairRDD[K, V]

    Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

    This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.
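
    A minimal sketch (hypothetical scores RDD with Comparable keys):

      // Assumes: import org.apache.spark.HashPartitioner;
      JavaPairRDD<String, Integer> sorted =
          scores.repartitionAndSortWithinPartitions(new HashPartitioner(4));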

  135. def rightOuterJoin[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, (Optional[V], W)]

    Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.

  136. def rightOuterJoin[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, (Optional[V], W)]

    Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD using the existing partitioner/parallelism level.

  137. def rightOuterJoin[W](other: JavaPairRDD[K, W], partitioner: Partitioner): JavaPairRDD[K, (Optional[V], W)]

    Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.

  138. def sample(withReplacement: Boolean, fraction: Double, seed: Long): JavaPairRDD[K, V]

    Return a sampled subset of this RDD.

  139. def sample(withReplacement: Boolean, fraction: Double): JavaPairRDD[K, V]

    Return a sampled subset of this RDD.

  140. def sampleByKey(withReplacement: Boolean, fractions: Map[K, Double]): JavaPairRDD[K, V]

    Return a subset of this RDD sampled by key (via stratified sampling).

    Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map, via simple random sampling with one pass over the RDD, to produce a sample of size that's approximately equal to the sum of math.ceil(numItems * samplingRate) over all key values.

    Use Utils.random.nextLong as the default seed for the random number generator.
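
    A minimal sketch (hypothetical scores RDD); note that every key occurring in the RDD needs an entry in the fractions map.

      // Assumes: import java.util.HashMap; import java.util.Map;
      Map<String, Double> fractions = new HashMap<>();
      fractions.put("a", 0.5);  // keep roughly half of the pairs with key "a"
      fractions.put("b", 0.1);  // keep roughly a tenth of the pairs with key "b"
      JavaPairRDD<String, Integer> sampled = scores.sampleByKey(false, fractions);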

  141. def sampleByKey(withReplacement: Boolean, fractions: Map[K, Double], seed: Long): JavaPairRDD[K, V]

    Return a subset of this RDD sampled by key (via stratified sampling).

    Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map, via simple random sampling with one pass over the RDD, to produce a sample of size that's approximately equal to the sum of math.ceil(numItems * samplingRate) over all key values.

  142. def sampleByKeyExact(withReplacement: Boolean, fractions: Map[K, Double]): JavaPairRDD[K, V]

    Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).

    This method differs from sampleByKey in that we make additional passes over the RDD to create a sample size that's exactly equal to the sum of math.ceil(numItems * samplingRate) over all key values with a 99.99% confidence. When sampling without replacement, we need one additional pass over the RDD to guarantee sample size; when sampling with replacement, we need two additional passes.

    Use Utils.random.nextLong as the default seed for the random number generator.

  143. def sampleByKeyExact(withReplacement: Boolean, fractions: Map[K, Double], seed: Long): JavaPairRDD[K, V]

    Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).

    This method differs from sampleByKey in that we make additional passes over the RDD to create a sample size that's exactly equal to the sum of math.ceil(numItems * samplingRate) over all key values with a 99.99% confidence. When sampling without replacement, we need one additional pass over the RDD to guarantee sample size; when sampling with replacement, we need two additional passes.

  144. def saveAsHadoopDataset(conf: JobConf): Unit

    Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system. The JobConf should set an OutputFormat and any output paths required (e.g. a table name to write to) in the same way as it would be configured for a Hadoop MapReduce job.

  145. def saveAsHadoopFile[F <: OutputFormat[_, _]](path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[F], codec: Class[_ <: CompressionCodec]): Unit

    Output the RDD to any Hadoop-supported file system, compressing with the supplied codec.

  146. def saveAsHadoopFile[F <: OutputFormat[_, _]](path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[F]): Unit

    Output the RDD to any Hadoop-supported file system.

  147. def saveAsHadoopFile[F <: OutputFormat[_, _]](path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[F], conf: JobConf): Unit

    Output the RDD to any Hadoop-supported file system.

  148. def saveAsNewAPIHadoopDataset(conf: Configuration): Unit

    Output the RDD to any Hadoop-supported storage system, using a Configuration object for that storage system.

  149. def saveAsNewAPIHadoopFile[F <: OutputFormat[_, _]](path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[F]): Unit

    Output the RDD to any Hadoop-supported file system.

  150. def saveAsNewAPIHadoopFile[F <: OutputFormat[_, _]](path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[F], conf: Configuration): Unit

    Output the RDD to any Hadoop-supported file system.

  151. def saveAsObjectFile(path: String): Unit

    Save this RDD as a SequenceFile of serialized objects.

    Definition Classes
    JavaRDDLike
  152. def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec]): Unit

    Save this RDD as a compressed text file, using string representations of elements.

    Definition Classes
    JavaRDDLike
  153. def saveAsTextFile(path: String): Unit

    Save this RDD as a text file, using string representations of elements.

    Definition Classes
    JavaRDDLike
  154. def setName(name: String): JavaPairRDD[K, V]

    Assign a name to this RDD.

  155. def sortByKey(comp: Comparator[K], ascending: Boolean, numPartitions: Int): JavaPairRDD[K, V]

    Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

  156. def sortByKey(comp: Comparator[K], ascending: Boolean): JavaPairRDD[K, V]

    Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

  157. def sortByKey(comp: Comparator[K]): JavaPairRDD[K, V]

    Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

  158. def sortByKey(ascending: Boolean, numPartitions: Int): JavaPairRDD[K, V]

    Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

  159. def sortByKey(ascending: Boolean): JavaPairRDD[K, V]

    Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

  160. def sortByKey(): JavaPairRDD[K, V]

    Permalink

    Sort the RDD by key, so that each partition contains a sorted range of the elements in ascending order.

    Sort the RDD by key, so that each partition contains a sorted range of the elements in ascending order. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).
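
    A minimal sketch (data and app name are illustrative, not from this page) exercising three of the sortByKey overloads listed above; note that any Comparator passed in must be serializable:

      import java.util.Arrays;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class SortByKeyExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "sort-by-key");
          JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("banana", 2), new Tuple2<>("Apple", 5), new Tuple2<>("cherry", 1)));

          // Natural ascending order of the keys: Apple, banana, cherry.
          System.out.println(pairs.sortByKey().collect());

          // Same keys in descending order.
          System.out.println(pairs.sortByKey(false).collect());

          // Explicit (case-insensitive, serializable) comparator, ascending, 2 output partitions.
          System.out.println(pairs.sortByKey(String.CASE_INSENSITIVE_ORDER, true, 2).collect());

          sc.stop();
        }
      }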

  161. def subtract(other: JavaPairRDD[K, V], p: Partitioner): JavaPairRDD[K, V]

    Permalink

    Return an RDD with the elements from this that are not in other.

  162. def subtract(other: JavaPairRDD[K, V], numPartitions: Int): JavaPairRDD[K, V]

    Permalink

    Return an RDD with the elements from this that are not in other.

  163. def subtract(other: JavaPairRDD[K, V]): JavaPairRDD[K, V]

    Permalink

    Return an RDD with the elements from this that are not in other.

    Uses this RDD's partitioner and partition count, because even if other is huge, the resulting RDD will be no larger than this one.

  164. def subtractByKey[W](other: JavaPairRDD[K, W], p: Partitioner): JavaPairRDD[K, V]

    Permalink

    Return an RDD with the pairs from this whose keys are not in other.

  165. def subtractByKey[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, V]

    Permalink

    Return an RDD with the pairs from this whose keys are not in other.

  166. def subtractByKey[W](other: JavaPairRDD[K, W]): JavaPairRDD[K, V]

    Permalink

    Return an RDD with the pairs from this whose keys are not in other.

    Uses this RDD's partitioner and partition count, because even if other is huge, the resulting RDD will be no larger than this one.
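
    A minimal sketch (data and app name are illustrative, not from this page) contrasting subtract, which removes only exact (key, value) pairs present in the other RDD, with subtractByKey, which removes every pair whose key appears in the other RDD:

      import java.util.Arrays;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class SubtractExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "subtract");
          JavaPairRDD<String, Integer> left = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3)));
          JavaPairRDD<String, Integer> right = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 1)));

          // Only the exact pair (a,1) is removed: (a,2) and (b,3) remain.
          System.out.println(left.subtract(right).collect());

          // Every pair with key "a" is removed: only (b,3) remains.
          System.out.println(left.subtractByKey(right).collect());

          sc.stop();
        }
      }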

  167. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  168. def take(num: Int): List[(K, V)]

    Permalink

    Take the first num elements of the RDD.

    Take the first num elements of the RDD. This currently scans the partitions *one by one*, so it will be slow if a lot of partitions are required. In that case, use collect() to get the whole RDD instead.

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  169. def takeAsync(num: Int): JavaFutureAction[List[(K, V)]]

    Permalink

    The asynchronous version of the take action, which returns a future for retrieving the first num elements of this RDD.

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  170. def takeOrdered(num: Int): List[(K, V)]

    Permalink

    Returns the first k (smallest) elements from this RDD using the natural ordering for T, while maintaining the order.

    num

    k, the number of top elements to return

    returns

    an array of top elements

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  171. def takeOrdered(num: Int, comp: Comparator[(K, V)]): List[(K, V)]

    Permalink

    Returns the first k (smallest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.

    num

    k, the number of elements to return

    comp

    the comparator that defines the order

    returns

    an array of top elements

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
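
    A minimal sketch (data and app name are illustrative, not from this page) of takeOrdered with an explicit Comparator over the pair values; the comparator is written as a named serializable class because it is shipped to the executors:

      import java.io.Serializable;
      import java.util.Arrays;
      import java.util.Comparator;
      import java.util.List;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class TakeOrderedExample {
        // Orders pairs by their Integer value.
        static class ByValue implements Comparator<Tuple2<String, Integer>>, Serializable {
          @Override
          public int compare(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
            return Integer.compare(a._2(), b._2());
          }
        }

        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "take-ordered");
          JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 3), new Tuple2<>("b", 1), new Tuple2<>("c", 2)));

          // The two pairs with the smallest values: (b,1) and (c,2).
          List<Tuple2<String, Integer>> smallest = pairs.takeOrdered(2, new ByValue());
          System.out.println(smallest);

          sc.stop();
        }
      }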

  172. def takeSample(withReplacement: Boolean, num: Int, seed: Long): List[(K, V)]

    Permalink
    Definition Classes
    JavaRDDLike
  173. def takeSample(withReplacement: Boolean, num: Int): List[(K, V)]

    Permalink
    Definition Classes
    JavaRDDLike
  174. def toDebugString(): String

    Permalink

    A description of this RDD and its recursive dependencies for debugging.

    Definition Classes
    JavaRDDLike
  175. def toLocalIterator(): Iterator[(K, V)]

    Permalink

    Return an iterator that contains all of the elements in this RDD.

    The iterator will consume as much memory as the largest partition in this RDD.

    Definition Classes
    JavaRDDLike
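
    A minimal sketch (data and app name are illustrative, not from this page): toLocalIterator streams the RDD back to the driver one partition at a time, so driver memory use is bounded by the largest partition rather than the whole dataset:

      import java.util.Arrays;
      import java.util.Iterator;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class ToLocalIteratorExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "to-local-iterator");
          JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("c", 3)), 2);

          // Pulls one partition at a time instead of collecting everything at once.
          Iterator<Tuple2<String, Integer>> it = pairs.toLocalIterator();
          while (it.hasNext()) {
            System.out.println(it.next());
          }

          sc.stop();
        }
      }
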
  176. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  177. def top(num: Int): List[(K, V)]

    Permalink

    Returns the top k (largest) elements from this RDD using the natural ordering for T and maintains the order.

    num

    k, the number of top elements to return

    returns

    an array of top elements

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  178. def top(num: Int, comp: Comparator[(K, V)]): List[(K, V)]

    Permalink

    Returns the top k (largest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.

    num

    k, the number of top elements to return

    comp

    the comparator that defines the order

    returns

    an array of top elements

    Definition Classes
    JavaRDDLike
    Note

    this method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.

  179. def treeAggregate[U](zeroValue: U, seqOp: Function2[U, (K, V), U], combOp: Function2[U, U, U]): U

    Permalink

    Same as org.apache.spark.api.java.JavaRDDLike#treeAggregate, using the suggested depth of 2.

    Definition Classes
    JavaRDDLike
  180. def treeAggregate[U](zeroValue: U, seqOp: Function2[U, (K, V), U], combOp: Function2[U, U, U], depth: Int): U

    Permalink

    Aggregates the elements of this RDD in a multi-level tree pattern.

    depth

    suggested depth of the tree

    Definition Classes
    JavaRDDLike
    See also

    org.apache.spark.api.java.JavaRDDLike#aggregate
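
    A minimal sketch (data and app name are illustrative, not from this page): summing the values of a pair RDD with treeAggregate, which merges partial results in a multi-level tree rather than sending every partition's result straight to the driver:

      import java.util.Arrays;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import org.apache.spark.api.java.function.Function2;
      import scala.Tuple2;

      public class TreeAggregateExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[4]", "tree-aggregate");
          JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("c", 3)), 4);

          // seqOp folds one (key, value) pair into a running Integer total;
          // combOp merges two partial totals.
          Function2<Integer, Tuple2<String, Integer>, Integer> seqOp = (acc, kv) -> acc + kv._2();
          Function2<Integer, Integer, Integer> combOp = Integer::sum;

          // Depth 2 is the same value the overload without a depth parameter suggests.
          Integer total = pairs.treeAggregate(0, seqOp, combOp, 2);
          System.out.println(total);  // 6

          sc.stop();
        }
      }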

  181. def treeReduce(f: Function2[(K, V), (K, V), (K, V)]): (K, V)

    Permalink

    Same as org.apache.spark.api.java.JavaRDDLike#treeReduce, using the suggested depth of 2.

    Definition Classes
    JavaRDDLike
  182. def treeReduce(f: Function2[(K, V), (K, V), (K, V)], depth: Int): (K, V)

    Permalink

    Reduces the elements of this RDD in a multi-level tree pattern.

    depth

    suggested depth of the tree

    Definition Classes
    JavaRDDLike
    See also

    org.apache.spark.api.java.JavaRDDLike#reduce

  183. def union(other: JavaPairRDD[K, V]): JavaPairRDD[K, V]

    Permalink

    Return the union of this RDD and another one.

    Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).
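
    A minimal sketch (data and app name are illustrative, not from this page) showing that union keeps identical elements unless distinct() is applied afterwards:

      import java.util.Arrays;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class UnionExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "union");
          JavaPairRDD<String, Integer> a = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("x", 1)));
          JavaPairRDD<String, Integer> b = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("x", 1), new Tuple2<>("y", 2)));

          // (x,1) appears twice because identical elements are kept.
          System.out.println(a.union(b).collect());

          // distinct() removes the duplicate: (x,1) and (y,2) remain.
          System.out.println(a.union(b).distinct().collect());

          sc.stop();
        }
      }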

  184. def unpersist(blocking: Boolean): JavaPairRDD[K, V]

    Permalink

    Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.

    blocking

    Whether to block until all blocks are deleted.

  185. def unpersist(): JavaPairRDD[K, V]

    Permalink

    Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.

    Mark the RDD as non-persistent, and remove all blocks for it from memory and disk. This method blocks until all blocks are deleted.

  186. implicit val vClassTag: ClassTag[V]

    Permalink
  187. def values(): JavaRDD[V]

    Permalink

    Return an RDD with the values of each tuple.

  188. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  189. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  190. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  191. def wrapRDD(rdd: RDD[(K, V)]): JavaPairRDD[K, V]

    Permalink
    Definition Classes
    JavaPairRDD → JavaRDDLike
  192. def zip[U](other: JavaRDDLike[U, _]): JavaPairRDD[(K, V), U]

    Permalink

    Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc.

    Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the *same number of partitions* and the *same number of elements in each partition* (e.g. one was made through a map on the other).

    Definition Classes
    JavaRDDLike
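
    A minimal sketch (data and app name are illustrative, not from this page): the second RDD is derived from the first with a map, which guarantees the "same number of partitions and same number of elements per partition" assumption that zip requires:

      import java.util.Arrays;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class ZipExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "zip");
          JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 1), new Tuple2<>("b", 2)), 2);

          // Deriving the other RDD from this one keeps the partitioning aligned.
          JavaRDD<Integer> doubled = pairs.map(kv -> kv._2() * 2);

          // Pairs each element of `pairs` with the corresponding element of `doubled`:
          // ((a,1),2) and ((b,2),4).
          JavaPairRDD<Tuple2<String, Integer>, Integer> zipped = pairs.zip(doubled);
          System.out.println(zipped.collect());

          sc.stop();
        }
      }
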
  193. def zipPartitions[U, V](other: JavaRDDLike[U, _], f: FlatMapFunction2[Iterator[(K, V)], Iterator[U], V]): JavaRDD[V]

    Permalink

    Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions.

    Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions. Assumes that all the RDDs have the *same number of partitions*, but does *not* require them to have the same number of elements in each partition.

    Definition Classes
    JavaRDDLike
  194. def zipWithIndex(): JavaPairRDD[(K, V), Long]

    Permalink

    Zips this RDD with its element indices.

    Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition receives the largest index. This is similar to Scala's zipWithIndex but it uses Long instead of Int as the index type. This method needs to trigger a Spark job when this RDD contains more than one partition.

    Definition Classes
    JavaRDDLike
  195. def zipWithUniqueId(): JavaPairRDD[(K, V), Long]

    Permalink

    Zips this RDD with generated unique Long ids.

    Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method won't trigger a Spark job, which is different from org.apache.spark.rdd.RDD#zipWithIndex.

    Definition Classes
    JavaRDDLike
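
    A minimal sketch (data and app name are illustrative, not from this page) contrasting zipWithIndex, which assigns consecutive indices and may trigger a job when there is more than one partition, with zipWithUniqueId, which assigns unique but possibly non-consecutive ids without triggering a job:

      import java.util.Arrays;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaSparkContext;
      import scala.Tuple2;

      public class ZipWithIndexExample {
        public static void main(String[] args) {
          JavaSparkContext sc = new JavaSparkContext("local[2]", "zip-with-index");
          JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
              new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("c", 3)), 2);

          // Consecutive indices 0, 1, 2 in partition-then-element order.
          System.out.println(pairs.zipWithIndex().collect());

          // Ids of the form k, n+k, 2*n+k per partition k (n = 2 partitions here),
          // so the ids are unique but may have gaps, e.g. 0, 1, 3.
          System.out.println(pairs.zipWithUniqueId().collect());

          sc.stop();
        }
      }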

Inherited from AbstractJavaRDDLike[(K, V), JavaPairRDD[K, V]]

Inherited from JavaRDDLike[(K, V), JavaPairRDD[K, V]]

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any

Ungrouped