org.apache.spark.api.java
Class JavaPairRDD<K,V>

Object
  extended by org.apache.spark.api.java.JavaPairRDD<K,V>
All Implemented Interfaces:
java.io.Serializable, JavaRDDLike<scala.Tuple2<K,V>,JavaPairRDD<K,V>>
Direct Known Subclasses:
JavaHadoopRDD, JavaNewHadoopRDD

public class JavaPairRDD<K,V>
extends Object

See Also:
Serialized Form

Constructor Summary
JavaPairRDD(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> kClassTag, scala.reflect.ClassTag<V> vClassTag)
           
 
Method Summary
<U> JavaPairRDD<K,U>
aggregateByKey(U zeroValue, Function2<U,V,U> seqFunc, Function2<U,U,U> combFunc)
          Aggregate the values of each key, using given combine functions and a neutral "zero value".
<U> JavaPairRDD<K,U>
aggregateByKey(U zeroValue, int numPartitions, Function2<U,V,U> seqFunc, Function2<U,U,U> combFunc)
          Aggregate the values of each key, using given combine functions and a neutral "zero value".
<U> JavaPairRDD<K,U>
aggregateByKey(U zeroValue, Partitioner partitioner, Function2<U,V,U> seqFunc, Function2<U,U,U> combFunc)
          Aggregate the values of each key, using given combine functions and a neutral "zero value".
 JavaPairRDD<K,V> cache()
          Persist this RDD with the default storage level (`MEMORY_ONLY`).
 scala.reflect.ClassTag<scala.Tuple2<K,V>> classTag()
           
 JavaPairRDD<K,V> coalesce(int numPartitions)
          Return a new RDD that is reduced into numPartitions partitions.
 JavaPairRDD<K,V> coalesce(int numPartitions, boolean shuffle)
          Return a new RDD that is reduced into numPartitions partitions.
<W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>>
cogroup(JavaPairRDD<K,W> other)
          For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
<W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>>
cogroup(JavaPairRDD<K,W> other, int numPartitions)
          For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
<W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>>
cogroup(JavaPairRDD<K,W> other, Partitioner partitioner)
          For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
<W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>>
cogroup(JavaPairRDD<K,W1> other1, JavaPairRDD<K,W2> other2)
          For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
<W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>>
cogroup(JavaPairRDD<K,W1> other1, JavaPairRDD<K,W2> other2, int numPartitions)
          For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
<W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>>
cogroup(JavaPairRDD<K,W1> other1, JavaPairRDD<K,W2> other2, JavaPairRDD<K,W3> other3)
          For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
<W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>>
cogroup(JavaPairRDD<K,W1> other1, JavaPairRDD<K,W2> other2, JavaPairRDD<K,W3> other3, int numPartitions)
          For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
<W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>>
cogroup(JavaPairRDD<K,W1> other1, JavaPairRDD<K,W2> other2, JavaPairRDD<K,W3> other3, Partitioner partitioner)
          For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
<W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>>
cogroup(JavaPairRDD<K,W1> other1, JavaPairRDD<K,W2> other2, Partitioner partitioner)
          For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
 java.util.Map<K,V> collectAsMap()
          Return the key-value pairs in this RDD to the master as a Map.
<C> JavaPairRDD<K,C>
combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners)
          Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level and uses map-side aggregation.
<C> JavaPairRDD<K,C>
combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, int numPartitions)
          Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.
<C> JavaPairRDD<K,C>
combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner)
          Generic function to combine the elements for each key using a custom set of aggregation functions.
<C> JavaPairRDD<K,C>
combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner, boolean mapSideCombine, Serializer serializer)
          Generic function to combine the elements for each key using a custom set of aggregation functions.
 JavaPairRDD<K,Object> countApproxDistinctByKey(double relativeSD)
          Return approximate number of distinct values for each key in this RDD.
 JavaPairRDD<K,Object> countApproxDistinctByKey(double relativeSD, int numPartitions)
          Return approximate number of distinct values for each key in this RDD.
 JavaPairRDD<K,Object> countApproxDistinctByKey(double relativeSD, Partitioner partitioner)
          Return approximate number of distinct values for each key in this RDD.
 java.util.Map<K,Object> countByKey()
          Count the number of elements for each key, and return the result to the master as a Map.
 PartialResult<java.util.Map<K,BoundedDouble>> countByKeyApprox(long timeout)
          :: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
 PartialResult<java.util.Map<K,BoundedDouble>> countByKeyApprox(long timeout, double confidence)
          :: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.
 JavaPairRDD<K,V> distinct()
          Return a new RDD containing the distinct elements in this RDD.
 JavaPairRDD<K,V> distinct(int numPartitions)
          Return a new RDD containing the distinct elements in this RDD.
 JavaPairRDD<K,V> filter(Function<scala.Tuple2<K,V>,Boolean> f)
          Return a new RDD containing only the elements that satisfy a predicate.
 scala.Tuple2<K,V> first()
          Return the first element in this RDD.
<U> JavaPairRDD<K,U>
flatMapValues(Function<V,Iterable<U>> f)
          Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
 JavaPairRDD<K,V> foldByKey(V zeroValue, Function2<V,V,V> func)
          Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
 JavaPairRDD<K,V> foldByKey(V zeroValue, int numPartitions, Function2<V,V,V> func)
          Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
 JavaPairRDD<K,V> foldByKey(V zeroValue, Partitioner partitioner, Function2<V,V,V> func)
          Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
static
<K,V> JavaPairRDD<K,V>
fromJavaRDD(JavaRDD<scala.Tuple2<K,V>> rdd)
          Convert a JavaRDD of key-value pairs to JavaPairRDD.
static
<K,V> JavaPairRDD<K,V>
fromRDD(RDD<scala.Tuple2<K,V>> rdd, scala.reflect.ClassTag<K> evidence$5, scala.reflect.ClassTag<V> evidence$6)
           
<W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>>
fullOuterJoin(JavaPairRDD<K,W> other)
          Perform a full outer join of this and other.
<W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>>
fullOuterJoin(JavaPairRDD<K,W> other, int numPartitions)
          Perform a full outer join of this and other.
<W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>>
fullOuterJoin(JavaPairRDD<K,W> other, Partitioner partitioner)
          Perform a full outer join of this and other.
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface org.apache.spark.api.java.JavaRDDLike
aggregate, cartesian, checkpoint, collect, collectAsync, collectPartitions, context, count, countApprox, countApprox, countApproxDistinct, countAsync, countByValue, countByValueApprox, countByValueApprox, flatMap, flatMapToDouble, flatMapToPair, fold, foreach, foreachAsync, foreachPartition, foreachPartitionAsync, getCheckpointFile, getStorageLevel, glom, groupBy, groupBy, id, isCheckpointed, isEmpty, iterator, keyBy, map, mapPartitions, mapPartitions, mapPartitionsToDouble, mapPartitionsToDouble, mapPartitionsToPair, mapPartitionsToPair, mapPartitionsWithIndex, mapToDouble, mapToPair, max, min, name, partitions, pipe, pipe, pipe, reduce, saveAsObjectFile, saveAsTextFile, saveAsTextFile, splits, take, takeAsync, takeOrdered, takeOrdered, takeSample, takeSample, toArray, toDebugString, toLocalIterator, top, top, treeAggregate, treeAggregate, treeReduce, treeReduce, zip, zipPartitions, zipWithIndex, zipWithUniqueId
 

Constructor Detail

JavaPairRDD

public JavaPairRDD(RDD<scala.Tuple2<K,V>> rdd,
                   scala.reflect.ClassTag<K> kClassTag,
                   scala.reflect.ClassTag<V> vClassTag)
Method Detail

fromRDD

public static <K,V> JavaPairRDD<K,V> fromRDD(RDD<scala.Tuple2<K,V>> rdd,
                                             scala.reflect.ClassTag<K> evidence$5,
                                             scala.reflect.ClassTag<V> evidence$6)

toRDD

public static <K,V> RDD<scala.Tuple2<K,V>> toRDD(JavaPairRDD<K,V> rdd)

fromJavaRDD

public static <K,V> JavaPairRDD<K,V> fromJavaRDD(JavaRDD<scala.Tuple2<K,V>> rdd)
Convert a JavaRDD of key-value pairs to JavaPairRDD.


rdd

public RDD<scala.Tuple2<K,V>> rdd()

kClassTag

public scala.reflect.ClassTag<K> kClassTag()

vClassTag

public scala.reflect.ClassTag<V> vClassTag()

wrapRDD

public JavaPairRDD<K,V> wrapRDD(RDD<scala.Tuple2<K,V>> rdd)

classTag

public scala.reflect.ClassTag<scala.Tuple2<K,V>> classTag()

cache

public JavaPairRDD<K,V> cache()
Persist this RDD with the default storage level (`MEMORY_ONLY`).


persist

public JavaPairRDD<K,V> persist(StorageLevel newLevel)
Set this RDD's storage level to persist its values across operations after the first time it is computed. Can only be called once on each RDD.

Parameters:
newLevel - (undocumented)
Returns:
(undocumented)

unpersist

public JavaPairRDD<K,V> unpersist()
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk. This method blocks until all blocks are deleted.

Returns:
(undocumented)

unpersist

public JavaPairRDD<K,V> unpersist(boolean blocking)
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.

Parameters:
blocking - Whether to block until all blocks are deleted.
Returns:
(undocumented)

distinct

public JavaPairRDD<K,V> distinct()
Return a new RDD containing the distinct elements in this RDD.

Returns:
(undocumented)

distinct

public JavaPairRDD<K,V> distinct(int numPartitions)
Return a new RDD containing the distinct elements in this RDD.

Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)

filter

public JavaPairRDD<K,V> filter(Function<scala.Tuple2<K,V>,Boolean> f)
Return a new RDD containing only the elements that satisfy a predicate.

Parameters:
f - (undocumented)
Returns:
(undocumented)

coalesce

public JavaPairRDD<K,V> coalesce(int numPartitions)
Return a new RDD that is reduced into numPartitions partitions.

Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)

coalesce

public JavaPairRDD<K,V> coalesce(int numPartitions,
                                 boolean shuffle)
Return a new RDD that is reduced into numPartitions partitions.

Parameters:
numPartitions - (undocumented)
shuffle - (undocumented)
Returns:
(undocumented)

repartition

public JavaPairRDD<K,V> repartition(int numPartitions)
Return a new RDD that has exactly numPartitions partitions.

Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data.

If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.

Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)

sample

public JavaPairRDD<K,V> sample(boolean withReplacement,
                               double fraction)
Return a sampled subset of this RDD.

Parameters:
withReplacement - (undocumented)
fraction - (undocumented)
Returns:
(undocumented)

sample

public JavaPairRDD<K,V> sample(boolean withReplacement,
                               double fraction,
                               long seed)
Return a sampled subset of this RDD.

Parameters:
withReplacement - (undocumented)
fraction - (undocumented)
seed - (undocumented)
Returns:
(undocumented)

sampleByKey

public JavaPairRDD<K,V> sampleByKey(boolean withReplacement,
                                    java.util.Map<K,Object> fractions,
                                    long seed)
Return a subset of this RDD sampled by key (via stratified sampling).

Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map, via simple random sampling with one pass over the RDD, to produce a sample of size that's approximately equal to the sum of math.ceil(numItems * samplingRate) over all key values.

Parameters:
withReplacement - (undocumented)
fractions - (undocumented)
seed - (undocumented)
Returns:
(undocumented)
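
For illustration only, a minimal self-contained Java sketch of stratified sampling; the class name, local master, input data, and per-key rates are made up, and Java 8 lambdas are assumed in the sketches further down this page:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SampleByKeyExample {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SampleByKeyExample").setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3), new Tuple2<>("b", 4)));

        // Per-key sampling rates; the map values are boxed Doubles (the signature erases them to Object).
        Map<String, Object> fractions = new HashMap<>();
        fractions.put("a", 0.5);   // keep roughly half of the "a" records
        fractions.put("b", 1.0);   // keep (approximately) all of the "b" records

        JavaPairRDD<String, Integer> sampled = pairs.sampleByKey(false, fractions, 42L);
        System.out.println(sampled.collect());

        sc.stop();
      }
    }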

sampleByKey

public JavaPairRDD<K,V> sampleByKey(boolean withReplacement,
                                    java.util.Map<K,Object> fractions)
Return a subset of this RDD sampled by key (via stratified sampling).

Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map, via simple random sampling with one pass over the RDD, to produce a sample of size that's approximately equal to the sum of math.ceil(numItems * samplingRate) over all key values.

Use Utils.random.nextLong as the default seed for the random number generator.

Parameters:
withReplacement - (undocumented)
fractions - (undocumented)
Returns:
(undocumented)

sampleByKeyExact

public JavaPairRDD<K,V> sampleByKeyExact(boolean withReplacement,
                                         java.util.Map<K,Object> fractions,
                                         long seed)

sampleByKeyExact

public JavaPairRDD<K,V> sampleByKeyExact(boolean withReplacement,
                                         java.util.Map<K,Object> fractions)

union

public JavaPairRDD<K,V> union(JavaPairRDD<K,V> other)
Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).

Parameters:
other - (undocumented)
Returns:
(undocumented)

intersection

public JavaPairRDD<K,V> intersection(JavaPairRDD<K,V> other)
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.

Note that this method performs a shuffle internally.

Parameters:
other - (undocumented)
Returns:
(undocumented)

first

public scala.Tuple2<K,V> first()
Description copied from interface: JavaRDDLike
Return the first element in this RDD.

Returns:
(undocumented)

combineByKey

public <C> JavaPairRDD<K,C> combineByKey(Function<V,C> createCombiner,
                                         Function2<C,V,C> mergeValue,
                                         Function2<C,C,C> mergeCombiners,
                                         Partitioner partitioner,
                                         boolean mapSideCombine,
                                         Serializer serializer)
Generic function to combine the elements for each key using a custom set of aggregation functions. Turns a JavaPairRDD[(K, V)] into a result of type JavaPairRDD[(K, C)], for a "combined type" C. Note that V and C can be different -- for example, one might group an RDD of type (Int, Int) into an RDD of type (Int, List[Int]). Users provide three functions:

- createCombiner, which turns a V into a C (e.g., creates a one-element list)
- mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
- mergeCombiners, to combine two C's into a single one.

In addition, users can control the partitioning of the output RDD, the serializer that is used for the shuffle, and whether to perform map-side aggregation (if a mapper can produce multiple items with the same key).

Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
partitioner - (undocumented)
mapSideCombine - (undocumented)
serializer - (undocumented)
Returns:
(undocumented)

combineByKey

public <C> JavaPairRDD<K,C> combineByKey(Function<V,C> createCombiner,
                                         Function2<C,V,C> mergeValue,
                                         Function2<C,C,C> mergeCombiners,
                                         Partitioner partitioner)
Generic function to combine the elements for each key using a custom set of aggregation functions. Turns a JavaPairRDD[(K, V)] into a result of type JavaPairRDD[(K, C)], for a "combined type" C. Note that V and C can be different -- for example, one might group an RDD of type (Int, Int) into an RDD of type (Int, List[Int]). Users provide three functions:

- createCombiner, which turns a V into a C (e.g., creates a one-element list)
- mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
- mergeCombiners, to combine two C's into a single one.

In addition, users can control the partitioning of the output RDD. This method automatically uses map-side aggregation in shuffling the RDD.

Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
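
As an illustrative sketch (class, method, and data names are made up; JavaSparkContext sc as in the sampleByKey example above), grouping Integer values into per-key lists with this overload, i.e. V = Integer and C = List<Integer>:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.spark.HashPartitioner;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class CombineByKeyExample {
      public static JavaPairRDD<String, List<Integer>> groupToLists(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3)));
        return pairs.combineByKey(
            v -> new ArrayList<Integer>(Arrays.asList(v)),    // createCombiner: V -> C
            (list, v) -> { list.add(v); return list; },       // mergeValue: fold one more V into a C
            (l1, l2) -> { l1.addAll(l2); return l1; },        // mergeCombiners: merge two C's
            new HashPartitioner(2));                          // control the output partitioning
      }
    }

Keeping C small (a running sum or count instead of a full list) is usually much cheaper when only an aggregate per key is needed.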

combineByKey

public <C> JavaPairRDD<K,C> combineByKey(Function<V,C> createCombiner,
                                         Function2<C,V,C> mergeValue,
                                         Function2<C,C,C> mergeCombiners,
                                         int numPartitions)
Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.

Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

reduceByKey

public JavaPairRDD<K,V> reduceByKey(Partitioner partitioner,
                                    Function2<V,V,V> func)
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.

Parameters:
partitioner - (undocumented)
func - (undocumented)
Returns:
(undocumented)

reduceByKeyLocally

public java.util.Map<K,V> reduceByKeyLocally(Function2<V,V,V> func)
Merge the values for each key using an associative reduce function, but return the results immediately to the master as a Map. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.

Parameters:
func - (undocumented)
Returns:
(undocumented)

countByKey

public java.util.Map<K,Object> countByKey()
Count the number of elements for each key, and return the result to the master as a Map.


countByKeyApprox

public PartialResult<java.util.Map<K,BoundedDouble>> countByKeyApprox(long timeout)
:: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.

Parameters:
timeout - (undocumented)
Returns:
(undocumented)

countByKeyApprox

public PartialResult<java.util.Map<K,BoundedDouble>> countByKeyApprox(long timeout,
                                                                      double confidence)
:: Experimental :: Approximate version of countByKey that can return a partial result if it does not finish within a timeout.

Parameters:
timeout - (undocumented)
confidence - (undocumented)
Returns:
(undocumented)

aggregateByKey

public <U> JavaPairRDD<K,U> aggregateByKey(U zeroValue,
                                           Partitioner partitioner,
                                           Function2<U,V,U> seqFunc,
                                           Function2<U,U,U> combFunc)
Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's, as in scala.TraversableOnce. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.

Parameters:
zeroValue - (undocumented)
partitioner - (undocumented)
seqFunc - (undocumented)
combFunc - (undocumented)
Returns:
(undocumented)

aggregateByKey

public <U> JavaPairRDD<K,U> aggregateByKey(U zeroValue,
                                           int numPartitions,
                                           Function2<U,V,U> seqFunc,
                                           Function2<U,U,U> combFunc)
Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's, as in scala.TraversableOnce. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.

Parameters:
zeroValue - (undocumented)
numPartitions - (undocumented)
seqFunc - (undocumented)
combFunc - (undocumented)
Returns:
(undocumented)

aggregateByKey

public <U> JavaPairRDD<K,U> aggregateByKey(U zeroValue,
                                           Function2<U,V,U> seqFunc,
                                           Function2<U,U,U> combFunc)
Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.

Parameters:
zeroValue - (undocumented)
seqFunc - (undocumented)
combFunc - (undocumented)
Returns:
(undocumented)
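
A sketch of the "U differs from V" case: computing a per-key (sum, count) pair from Integer values, given the sc from the sampleByKey sketch above (class and method names are illustrative):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class AggregateByKeyExample {
      // V = Integer, U = Tuple2<Integer, Integer> holding (sum, count).
      public static JavaPairRDD<String, Tuple2<Integer, Integer>> sumAndCount(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 4), new Tuple2<>("b", 7)));
        return pairs.aggregateByKey(
            new Tuple2<Integer, Integer>(0, 0),                               // neutral zero value
            (acc, v) -> new Tuple2<>(acc._1() + v, acc._2() + 1),             // merge a V into a U (within a partition)
            (u1, u2) -> new Tuple2<>(u1._1() + u2._1(), u1._2() + u2._2()));  // merge two U's (across partitions)
      }
    }

Dividing sum by count afterwards (for example with mapValues) gives a per-key average.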

foldByKey

public JavaPairRDD<K,V> foldByKey(V zeroValue,
                                  Partitioner partitioner,
                                  Function2<V,V,V> func)
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).

Parameters:
zeroValue - (undocumented)
partitioner - (undocumented)
func - (undocumented)
Returns:
(undocumented)

foldByKey

public JavaPairRDD<K,V> foldByKey(V zeroValue,
                                  int numPartitions,
                                  Function2<V,V,V> func)
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).

Parameters:
zeroValue - (undocumented)
numPartitions - (undocumented)
func - (undocumented)
Returns:
(undocumented)

foldByKey

public JavaPairRDD<K,V> foldByKey(V zeroValue,
                                  Function2<V,V,V> func)
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).

Parameters:
zeroValue - (undocumented)
func - (undocumented)
Returns:
(undocumented)
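
A minimal sketch with 0 as the zero value for addition (sc and all names as in the earlier sketches):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class FoldByKeyExample {
      public static JavaPairRDD<String, Integer> sums(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 5)));
        return pairs.foldByKey(0, (x, y) -> x + y);   // yields a -> 3, b -> 5
      }
    }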

reduceByKey

public JavaPairRDD<K,V> reduceByKey(Function2<V,V,V> func,
                                    int numPartitions)
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with numPartitions partitions.

Parameters:
func - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

groupByKey

public JavaPairRDD<K,Iterable<V>> groupByKey(Partitioner partitioner)
Group the values for each key in the RDD into a single sequence. Allows controlling the partitioning of the resulting key-value pair RDD by passing a Partitioner.

Note: If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using JavaPairRDD.reduceByKey or JavaPairRDD.combineByKey will provide much better performance.

Parameters:
partitioner - (undocumented)
Returns:
(undocumented)

groupByKey

public JavaPairRDD<K,Iterable<V>> groupByKey(int numPartitions)
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD into numPartitions partitions.

Note: If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using JavaPairRDD.reduceByKey or JavaPairRDD.combineByKey will provide much better performance.

Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)

subtract

public JavaPairRDD<K,V> subtract(JavaPairRDD<K,V> other)
Return an RDD with the elements from this that are not in other.

Uses this RDD's partitioner/partition size, because even if other is huge, the resulting RDD will be no larger than this one.

Parameters:
other - (undocumented)
Returns:
(undocumented)

subtract

public JavaPairRDD<K,V> subtract(JavaPairRDD<K,V> other,
                                 int numPartitions)
Return an RDD with the elements from this that are not in other.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

subtract

public JavaPairRDD<K,V> subtract(JavaPairRDD<K,V> other,
                                 Partitioner p)
Return an RDD with the elements from this that are not in other.

Parameters:
other - (undocumented)
p - (undocumented)
Returns:
(undocumented)

subtractByKey

public <W> JavaPairRDD<K,V> subtractByKey(JavaPairRDD<K,W> other)
Return an RDD with the pairs from this whose keys are not in other.

Uses this RDD's partitioner/partition size, because even if other is huge, the resulting RDD will be no larger than this one.

Parameters:
other - (undocumented)
Returns:
(undocumented)

subtractByKey

public <W> JavaPairRDD<K,V> subtractByKey(JavaPairRDD<K,W> other,
                                          int numPartitions)
Return an RDD with the pairs from `this` whose keys are not in `other`.


subtractByKey

public <W> JavaPairRDD<K,V> subtractByKey(JavaPairRDD<K,W> other,
                                          Partitioner p)
Return an RDD with the pairs from `this` whose keys are not in `other`.


partitionBy

public JavaPairRDD<K,V> partitionBy(Partitioner partitioner)
Return a copy of the RDD partitioned using the specified partitioner.

Parameters:
partitioner - (undocumented)
Returns:
(undocumented)
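
A sketch of hash-partitioning a pair RDD up front so that later key-oriented operations that reuse the same partitioner can avoid another shuffle (sc as in the earlier sketches; the partition count and names are arbitrary):

    import java.util.Arrays;
    import org.apache.spark.HashPartitioner;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class PartitionByExample {
      public static JavaPairRDD<String, Integer> partitioned(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("c", 3)));
        // Hash the keys into 4 partitions and cache the result for reuse.
        return pairs.partitionBy(new HashPartitioner(4)).cache();
      }
    }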

join

public <W> JavaPairRDD<K,scala.Tuple2<V,W>> join(JavaPairRDD<K,W> other,
                                                 Partitioner partitioner)
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Uses the given Partitioner to partition the output RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

leftOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<V,com.google.common.base.Optional<W>>> leftOuterJoin(JavaPairRDD<K,W> other,
                                                                                           Partitioner partitioner)
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Uses the given Partitioner to partition the output RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

rightOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,W>> rightOuterJoin(JavaPairRDD<K,W> other,
                                                                                            Partitioner partitioner)
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

fullOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>> fullOuterJoin(JavaPairRDD<K,W> other,
                                                                                                                            Partitioner partitioner)
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

combineByKey

public <C> JavaPairRDD<K,C> combineByKey(Function<V,C> createCombiner,
                                         Function2<C,V,C> mergeValue,
                                         Function2<C,C,C> mergeCombiners)
Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level and uses map-side aggregation.

Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
Returns:
(undocumented)

reduceByKey

public JavaPairRDD<K,V> reduceByKey(Function2<V,V,V> func)
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/parallelism level.

Parameters:
func - (undocumented)
Returns:
(undocumented)
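
The canonical word-count shape, sketched with Java 8 lambdas (sc and names as in the earlier sketches):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class ReduceByKeyExample {
      public static JavaPairRDD<String, Integer> wordCounts(JavaSparkContext sc) {
        JavaRDD<String> words = sc.parallelize(Arrays.asList("to", "be", "or", "not", "to", "be"));
        JavaPairRDD<String, Integer> ones = words.mapToPair(w -> new Tuple2<>(w, 1));   // (word, 1)
        return ones.reduceByKey((x, y) -> x + y);   // per-key sums: to -> 2, be -> 2, or -> 1, not -> 1
      }
    }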

groupByKey

public JavaPairRDD<K,Iterable<V>> groupByKey()
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with the existing partitioner/parallelism level.

Note: If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using JavaPairRDD.reduceByKey or JavaPairRDD.combineByKey will provide much better performance.

Returns:
(undocumented)
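
A sketch of collecting all values per key (sc and names as in the earlier sketches); when only an aggregate is needed, reduceByKey above is the cheaper choice:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class GroupByKeyExample {
      public static JavaPairRDD<String, Iterable<Integer>> grouped(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3)));
        return pairs.groupByKey();   // a -> [1, 2], b -> [3]
      }
    }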

join

public <W> JavaPairRDD<K,scala.Tuple2<V,W>> join(JavaPairRDD<K,W> other)
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.

Parameters:
other - (undocumented)
Returns:
(undocumented)
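
A sketch of the inner join (sc, class names, and data are illustrative); only keys present on both sides appear in the result:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class JoinExample {
      public static JavaPairRDD<String, Tuple2<Integer, String>> joined(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> ages = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("alice", 34), new Tuple2<>("bob", 28)));
        JavaPairRDD<String, String> cities = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("alice", "Oslo"), new Tuple2<>("carol", "Lima")));
        return ages.join(cities);   // only ("alice", (34, "Oslo")) matches
      }
    }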

join

public <W> JavaPairRDD<K,scala.Tuple2<V,W>> join(JavaPairRDD<K,W> other,
                                                 int numPartitions)
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

leftOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<V,com.google.common.base.Optional<W>>> leftOuterJoin(JavaPairRDD<K,W> other)
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output using the existing partitioner/parallelism level.

Parameters:
other - (undocumented)
Returns:
(undocumented)
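
A sketch showing how the Guava Optional on the right-hand side of the value is consumed (sc and names as in the earlier sketches):

    import java.util.Arrays;
    import com.google.common.base.Optional;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class LeftOuterJoinExample {
      public static void show(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> ages = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("alice", 34), new Tuple2<>("bob", 28)));
        JavaPairRDD<String, String> cities = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("alice", "Oslo")));
        for (Tuple2<String, Tuple2<Integer, Optional<String>>> t : ages.leftOuterJoin(cities).collect()) {
          // Every left-side key is kept; a missing right-side value is Optional.absent().
          System.out.println(t._1() + " -> " + t._2()._1() + ", " + t._2()._2().or("unknown city"));
        }
      }
    }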

leftOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<V,com.google.common.base.Optional<W>>> leftOuterJoin(JavaPairRDD<K,W> other,
                                                                                           int numPartitions)
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output into numPartitions partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

rightOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,W>> rightOuterJoin(JavaPairRDD<K,W> other)
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD using the existing partitioner/parallelism level.

Parameters:
other - (undocumented)
Returns:
(undocumented)

rightOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,W>> rightOuterJoin(JavaPairRDD<K,W> other,
                                                                                            int numPartitions)
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

fullOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>> fullOuterJoin(JavaPairRDD<K,W> other)
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD using the existing partitioner/parallelism level.

Parameters:
other - (undocumented)
Returns:
(undocumented)

fullOuterJoin

public <W> JavaPairRDD<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>> fullOuterJoin(JavaPairRDD<K,W> other,
                                                                                                                            int numPartitions)
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

collectAsMap

public java.util.Map<K,V> collectAsMap()
Return the key-value pairs in this RDD to the master as a Map.

Returns:
(undocumented)

mapValues

public <U> JavaPairRDD<K,U> mapValues(Function<V,U> f)
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.

Parameters:
f - (undocumented)
Returns:
(undocumented)

flatMapValues

public <U> JavaPairRDD<K,U> flatMapValues(Function<V,Iterable<U>> f)
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.

Parameters:
f - (undocumented)
Returns:
(undocumented)
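
A combined sketch of mapValues and flatMapValues (sc and names as in the earlier sketches); both keep the keys and the partitioning of the input:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class ValuesExample {
      public static void show(JavaSparkContext sc) {
        JavaPairRDD<String, String> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("fruit", "apple pear"), new Tuple2<>("veg", "leek")));
        // One output value per input value.
        JavaPairRDD<String, Integer> lengths = pairs.mapValues(s -> s.length());
        // Zero or more output values per input value; the key is repeated for each.
        JavaPairRDD<String, String> words = pairs.flatMapValues(s -> Arrays.asList(s.split(" ")));
        System.out.println(lengths.collect());   // [(fruit,10), (veg,4)]
        System.out.println(words.collect());     // [(fruit,apple), (fruit,pear), (veg,leek)]
      }
    }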

cogroup

public <W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairRDD<K,W> other,
                                                                        Partitioner partitioner)
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

cogroup

public <W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>> cogroup(JavaPairRDD<K,W1> other1,
                                                                                          JavaPairRDD<K,W2> other2,
                                                                                          Partitioner partitioner)
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.

Parameters:
other1 - (undocumented)
other2 - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

cogroup

public <W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>> cogroup(JavaPairRDD<K,W1> other1,
                                                                                                          JavaPairRDD<K,W2> other2,
                                                                                                          JavaPairRDD<K,W3> other3,
                                                                                                          Partitioner partitioner)
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.

Parameters:
other1 - (undocumented)
other2 - (undocumented)
other3 - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

cogroup

public <W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairRDD<K,W> other)
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.

Parameters:
other - (undocumented)
Returns:
(undocumented)
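
A sketch of the two-RDD cogroup (sc and names as in the earlier sketches); keys that appear on only one side get an empty Iterable on the other:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class CogroupExample {
      public static JavaPairRDD<String, Tuple2<Iterable<Integer>, Iterable<String>>> grouped(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> scores = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3)));
        JavaPairRDD<String, String> labels = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", "x"), new Tuple2<>("c", "y")));
        return scores.cogroup(labels);   // a -> ([1,2], [x]), b -> ([3], []), c -> ([], [y])
      }
    }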

cogroup

public <W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>> cogroup(JavaPairRDD<K,W1> other1,
                                                                                          JavaPairRDD<K,W2> other2)
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.

Parameters:
other1 - (undocumented)
other2 - (undocumented)
Returns:
(undocumented)

cogroup

public <W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>> cogroup(JavaPairRDD<K,W1> other1,
                                                                                                          JavaPairRDD<K,W2> other2,
                                                                                                          JavaPairRDD<K,W3> other3)
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.

Parameters:
other1 - (undocumented)
other2 - (undocumented)
other3 - (undocumented)
Returns:
(undocumented)

cogroup

public <W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairRDD<K,W> other,
                                                                        int numPartitions)
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

cogroup

public <W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>> cogroup(JavaPairRDD<K,W1> other1,
                                                                                          JavaPairRDD<K,W2> other2,
                                                                                          int numPartitions)
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.

Parameters:
other1 - (undocumented)
other2 - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

cogroup

public <W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>> cogroup(JavaPairRDD<K,W1> other1,
                                                                                                          JavaPairRDD<K,W2> other2,
                                                                                                          JavaPairRDD<K,W3> other3,
                                                                                                          int numPartitions)
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.

Parameters:
other1 - (undocumented)
other2 - (undocumented)
other3 - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

groupWith

public <W> JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>> groupWith(JavaPairRDD<K,W> other)
Alias for cogroup.


groupWith

public <W1,W2> JavaPairRDD<K,scala.Tuple3<Iterable<V>,Iterable<W1>,Iterable<W2>>> groupWith(JavaPairRDD<K,W1> other1,
                                                                                            JavaPairRDD<K,W2> other2)
Alias for cogroup.


groupWith

public <W1,W2,W3> JavaPairRDD<K,scala.Tuple4<Iterable<V>,Iterable<W1>,Iterable<W2>,Iterable<W3>>> groupWith(JavaPairRDD<K,W1> other1,
                                                                                                            JavaPairRDD<K,W2> other2,
                                                                                                            JavaPairRDD<K,W3> other3)
Alias for cogroup.


lookup

public java.util.List<V> lookup(K key)
Return the list of values in the RDD for key key. This operation is done efficiently if the RDD has a known partitioner by only searching the partition that the key maps to.

Parameters:
key - (undocumented)
Returns:
(undocumented)
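
A sketch of a driver-side lookup (sc and names as in the earlier sketches); with a known partitioner (for example after partitionBy) only the matching partition is scanned:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class LookupExample {
      public static List<Integer> valuesForA(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("a", 3)));
        return pairs.lookup("a");   // [1, 3]
      }
    }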

saveAsHadoopFile

public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFile(String path,
                                                                                    Class<?> keyClass,
                                                                                    Class<?> valueClass,
                                                                                    Class<F> outputFormatClass,
                                                                                    org.apache.hadoop.mapred.JobConf conf)
Output the RDD to any Hadoop-supported file system.


saveAsHadoopFile

public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFile(String path,
                                                                                    Class<?> keyClass,
                                                                                    Class<?> valueClass,
                                                                                    Class<F> outputFormatClass)
Output the RDD to any Hadoop-supported file system.


saveAsHadoopFile

public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFile(String path,
                                                                                    Class<?> keyClass,
                                                                                    Class<?> valueClass,
                                                                                    Class<F> outputFormatClass,
                                                                                    Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec)
Output the RDD to any Hadoop-supported file system, compressing with the supplied codec.


saveAsNewAPIHadoopFile

public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFile(String path,
                                                                                             Class<?> keyClass,
                                                                                             Class<?> valueClass,
                                                                                             Class<F> outputFormatClass,
                                                                                             org.apache.hadoop.conf.Configuration conf)
Output the RDD to any Hadoop-supported file system.


saveAsNewAPIHadoopDataset

public void saveAsNewAPIHadoopDataset(org.apache.hadoop.conf.Configuration conf)
Output the RDD to any Hadoop-supported storage system, using a Configuration object for that storage system.

Parameters:
conf - (undocumented)

saveAsNewAPIHadoopFile

public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFile(String path,
                                                                                             Class<?> keyClass,
                                                                                             Class<?> valueClass,
                                                                                             Class<F> outputFormatClass)
Output the RDD to any Hadoop-supported file system.


saveAsHadoopDataset

public void saveAsHadoopDataset(org.apache.hadoop.mapred.JobConf conf)
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system. The JobConf should set an OutputFormat and any output paths required (e.g. a table name to write to) in the same way as it would be configured for a Hadoop MapReduce job.

Parameters:
conf - (undocumented)
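
A hedged sketch of wiring up a JobConf for the old Hadoop API with TextOutputFormat (sc and names as in the earlier sketches; the output path is hypothetical and the directory must not already exist):

    import java.util.Arrays;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SaveAsHadoopDatasetExample {
      public static void save(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("b", 2)));
        JobConf jobConf = new JobConf();
        jobConf.setOutputKeyClass(String.class);      // TextOutputFormat writes keys/values via toString()
        jobConf.setOutputValueClass(Integer.class);
        jobConf.setOutputFormat(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(jobConf, new Path("/tmp/pairs-out"));   // hypothetical path
        pairs.saveAsHadoopDataset(jobConf);
      }
    }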

repartitionAndSortWithinPartitions

public JavaPairRDD<K,V> repartitionAndSortWithinPartitions(Partitioner partitioner)
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

Parameters:
partitioner - (undocumented)
Returns:
(undocumented)
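
A sketch using a HashPartitioner and the natural ordering of the keys (sc and names as in the earlier sketches); the comparator overload below covers keys without a natural ordering:

    import java.util.Arrays;
    import org.apache.spark.HashPartitioner;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class RepartitionAndSortExample {
      public static JavaPairRDD<Integer, String> shuffleAndSort(JavaSparkContext sc) {
        JavaPairRDD<Integer, String> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>(3, "c"), new Tuple2<>(1, "a"), new Tuple2<>(2, "b"), new Tuple2<>(4, "d")));
        // Two hash partitions; each partition comes out sorted by key as part of the shuffle.
        return pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2));
      }
    }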

repartitionAndSortWithinPartitions

public JavaPairRDD<K,V> repartitionAndSortWithinPartitions(Partitioner partitioner,
                                                           java.util.Comparator<K> comp)
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.

This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

Parameters:
partitioner - (undocumented)
comp - (undocumented)
Returns:
(undocumented)

sortByKey

public JavaPairRDD<K,V> sortByKey()
Sort the RDD by key, so that each partition contains a sorted range of the elements in ascending order. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Returns:
(undocumented)
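
A sketch of ascending and descending sorts on Comparable keys (sc and names as in the earlier sketches); the Comparator overloads below cover keys without a natural ordering:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SortByKeyExample {
      public static void show(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("banana", 2), new Tuple2<>("apple", 5), new Tuple2<>("cherry", 1)));
        System.out.println(pairs.sortByKey().collect());        // [(apple,5), (banana,2), (cherry,1)]
        System.out.println(pairs.sortByKey(false).collect());   // [(cherry,1), (banana,2), (apple,5)]
      }
    }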

sortByKey

public JavaPairRDD<K,V> sortByKey(boolean ascending)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
ascending - (undocumented)
Returns:
(undocumented)

sortByKey

public JavaPairRDD<K,V> sortByKey(boolean ascending,
                                  int numPartitions)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
ascending - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

sortByKey

public JavaPairRDD<K,V> sortByKey(java.util.Comparator<K> comp)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
comp - (undocumented)
Returns:
(undocumented)

sortByKey

public JavaPairRDD<K,V> sortByKey(java.util.Comparator<K> comp,
                                  boolean ascending)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
comp - (undocumented)
ascending - (undocumented)
Returns:
(undocumented)

sortByKey

public JavaPairRDD<K,V> sortByKey(java.util.Comparator<K> comp,
                                  boolean ascending,
                                  int numPartitions)
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).

Parameters:
comp - (undocumented)
ascending - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

keys

public JavaRDD<K> keys()
Return an RDD with the keys of each tuple.

Returns:
(undocumented)

values

public JavaRDD<V> values()
Return an RDD with the values of each tuple.

Returns:
(undocumented)

countApproxDistinctByKey

public JavaPairRDD<K,Object> countApproxDistinctByKey(double relativeSD,
                                                      Partitioner partitioner)
Return approximate number of distinct values for each key in this RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

Parameters:
relativeSD - Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
partitioner - partitioner of the resulting RDD.
Returns:
(undocumented)

countApproxDistinctByKey

public JavaPairRDD<K,Object> countApproxDistinctByKey(double relativeSD,
                                                      int numPartitions)
Return approximate number of distinct values for each key in this RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

Parameters:
relativeSD - Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
numPartitions - number of partitions of the resulting RDD.
Returns:
(undocumented)

countApproxDistinctByKey

public JavaPairRDD<K,Object> countApproxDistinctByKey(double relativeSD)
Return approximate number of distinct values for each key in this RDD.

The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm".

Parameters:
relativeSD - Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
Returns:
(undocumented)
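
A sketch with a 5% relative accuracy (sc and names as in the earlier sketches); the values come back typed as Object but are Long counts at runtime:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class CountApproxDistinctByKeyExample {
      public static void show(JavaSparkContext sc) {
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 9)));
        JavaPairRDD<String, Object> approx = pairs.countApproxDistinctByKey(0.05);
        System.out.println(approx.collectAsMap());   // approximately {a=2, b=1}
      }
    }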

setName

public JavaPairRDD<K,V> setName(String name)
Assign a name to this RDD.