org.apache.spark.streaming.api.java
Class JavaPairDStream<K,V>

Object
  extended by org.apache.spark.streaming.api.java.JavaPairDStream<K,V>
All Implemented Interfaces:
java.io.Serializable, JavaDStreamLike<scala.Tuple2<K,V>,JavaPairDStream<K,V>,JavaPairRDD<K,V>>
Direct Known Subclasses:
JavaPairInputDStream

public class JavaPairDStream<K,V>
extends Object

A Java-friendly interface to a DStream of key-value pairs, which provides extra methods like reduceByKey and join.

See Also:
Serialized Form

Constructor Summary
JavaPairDStream(DStream<scala.Tuple2<K,V>> dstream, scala.reflect.ClassTag<K> kManifest, scala.reflect.ClassTag<V> vManifest)
           
 
Method Summary
 JavaPairDStream<K,V> cache()
          Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
 scala.reflect.ClassTag<scala.Tuple2<K,V>> classTag()
           
<W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>>
cogroup(JavaPairDStream<K,W> other)
          Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>>
cogroup(JavaPairDStream<K,W> other, int numPartitions)
          Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>>
cogroup(JavaPairDStream<K,W> other, Partitioner partitioner)
          Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
<C> JavaPairDStream<K,C>
combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner)
          Combine elements of each key in DStream's RDDs using custom function.
<C> JavaPairDStream<K,C>
combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner, boolean mapSideCombine)
          Combine elements of each key in DStream's RDDs using custom function.
 JavaPairRDD<K,V> compute(Time validTime)
          Method that generates an RDD for the given time
 DStream<scala.Tuple2<K,V>> dstream()
           
 JavaPairDStream<K,V> filter(Function<scala.Tuple2<K,V>,Boolean> f)
          Return a new DStream containing only the elements that satisfy a predicate.
<U> JavaPairDStream<K,U>
flatMapValues(Function<V,Iterable<U>> f)
          Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
static
<K,V> JavaPairDStream<K,V>
fromJavaDStream(JavaDStream<scala.Tuple2<K,V>> dstream)
           
static
<K,V> JavaPairDStream<K,V>
fromPairDStream(DStream<scala.Tuple2<K,V>> dstream, scala.reflect.ClassTag<K> evidence$1, scala.reflect.ClassTag<V> evidence$2)
           
<W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>>
fullOuterJoin(JavaPairDStream<K,W> other)
          Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>>
fullOuterJoin(JavaPairDStream<K,W> other, int numPartitions)
          Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>>
fullOuterJoin(JavaPairDStream<K,W> other, Partitioner partitioner)
          Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
 JavaPairDStream<K,Iterable<V>> groupByKey()
          Return a new DStream by applying groupByKey to each RDD.
 JavaPairDStream<K,Iterable<V>> groupByKey(int numPartitions)
          Return a new DStream by applying groupByKey to each RDD.
 JavaPairDStream<K,Iterable<V>> groupByKey(Partitioner partitioner)
          Return a new DStream by applying groupByKey on each RDD of this DStream.
 JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration)
          Return a new DStream by applying groupByKey over a sliding window.
 JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration)
          Return a new DStream by applying groupByKey over a sliding window.
 JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, int numPartitions)
          Return a new DStream by applying groupByKey over a sliding window on this DStream.
 JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, Partitioner partitioner)
          Return a new DStream by applying groupByKey over a sliding window on this DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,W>>
join(JavaPairDStream<K,W> other)
          Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,W>>
join(JavaPairDStream<K,W> other, int numPartitions)
          Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,W>>
join(JavaPairDStream<K,W> other, Partitioner partitioner)
          Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
 scala.reflect.ClassTag<K> kManifest()
           
<W> JavaPairDStream<K,scala.Tuple2<V,com.google.common.base.Optional<W>>>
leftOuterJoin(JavaPairDStream<K,W> other)
          Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,com.google.common.base.Optional<W>>>
leftOuterJoin(JavaPairDStream<K,W> other, int numPartitions)
          Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,com.google.common.base.Optional<W>>>
leftOuterJoin(JavaPairDStream<K,W> other, Partitioner partitioner)
          Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
<U> JavaPairDStream<K,U>
mapValues(Function<V,U> f)
          Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
 JavaPairDStream<K,V> persist()
          Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
 JavaPairDStream<K,V> persist(StorageLevel storageLevel)
          Persist the RDDs of this DStream with the given storage level
 JavaPairDStream<K,V> reduceByKey(Function2<V,V,V> func)
          Return a new DStream by applying reduceByKey to each RDD.
 JavaPairDStream<K,V> reduceByKey(Function2<V,V,V> func, int numPartitions)
          Return a new DStream by applying reduceByKey to each RDD.
 JavaPairDStream<K,V> reduceByKey(Function2<V,V,V> func, Partitioner partitioner)
          Return a new DStream by applying reduceByKey to each RDD.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration)
          Return a new DStream by applying reduceByKey over a sliding window on this DStream.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration)
          Return a new DStream by applying reduceByKey over a sliding window.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions)
          Return a new DStream by applying reduceByKey over a sliding window.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner)
          Return a new DStream by applying reduceByKey over a sliding window.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration)
          Return a new DStream by applying incremental reduceByKey over a sliding window.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions, Function<scala.Tuple2<K,V>,Boolean> filterFunc)
          Return a new DStream by applying incremental reduceByKey over a sliding window.
 JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner, Function<scala.Tuple2<K,V>,Boolean> filterFunc)
          Return a new DStream by applying incremental reduceByKey over a sliding window.
 JavaPairDStream<K,V> repartition(int numPartitions)
          Return a new DStream with an increased or decreased level of parallelism.
<W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,W>>
rightOuterJoin(JavaPairDStream<K,W> other)
          Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,W>>
rightOuterJoin(JavaPairDStream<K,W> other, int numPartitions)
          Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,W>>
rightOuterJoin(JavaPairDStream<K,W> other, Partitioner partitioner)
          Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
 void saveAsHadoopFiles(String prefix, String suffix)
          Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapred.OutputFormat<?,?>>
void
saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass)
          Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapred.OutputFormat<?,?>>
void
saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass, org.apache.hadoop.mapred.JobConf conf)
          Save each RDD in this DStream as a Hadoop file.
 void saveAsNewAPIHadoopFiles(String prefix, String suffix)
          Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>>
void
saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass)
          Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>>
void
saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass, org.apache.hadoop.conf.Configuration conf)
          Save each RDD in this DStream as a Hadoop file.
static
<K> JavaPairDStream<K,Long>
scalaToJavaLong(JavaPairDStream<K,Object> dstream, scala.reflect.ClassTag<K> evidence$3)
           
 JavaDStream<scala.Tuple2<K,V>> toJavaDStream()
          Convert to a JavaDStream
 JavaPairDStream<K,V> union(JavaPairDStream<K,V> that)
          Return a new DStream by unifying data of another DStream with this DStream.
<S> JavaPairDStream<K,S>
updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc)
          Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
<S> JavaPairDStream<K,S>
updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc, int numPartitions)
          Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
<S> JavaPairDStream<K,S>
updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc, Partitioner partitioner)
          Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
<S> JavaPairDStream<K,S>
updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc, Partitioner partitioner, JavaPairRDD<K,S> initialRDD)
          Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
 scala.reflect.ClassTag<V> vManifest()
           
 JavaPairDStream<K,V> window(Duration windowDuration)
          Return a new DStream which is computed based on windowed batches of this DStream.
 JavaPairDStream<K,V> window(Duration windowDuration, Duration slideDuration)
          Return a new DStream which is computed based on windowed batches of this DStream.
 JavaPairRDD<K,V> wrapRDD(RDD<scala.Tuple2<K,V>> rdd)
           
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface org.apache.spark.streaming.api.java.JavaDStreamLike
checkpoint, context, count, countByValue, countByValue, countByValueAndWindow, countByValueAndWindow, countByWindow, flatMap, flatMapToPair, foreach, foreach, foreachRDD, foreachRDD, glom, map, mapPartitions, mapPartitionsToPair, mapToPair, print, print, reduce, reduceByWindow, reduceByWindow, reduceByWindow, scalaIntToJavaLong, slice, transform, transform, transformToPair, transformToPair, transformWith, transformWith, transformWithToPair, transformWithToPair
 

Constructor Detail

JavaPairDStream

public JavaPairDStream(DStream<scala.Tuple2<K,V>> dstream,
                       scala.reflect.ClassTag<K> kManifest,
                       scala.reflect.ClassTag<V> vManifest)
Method Detail

fromPairDStream

public static <K,V> JavaPairDStream<K,V> fromPairDStream(DStream<scala.Tuple2<K,V>> dstream,
                                                         scala.reflect.ClassTag<K> evidence$1,
                                                         scala.reflect.ClassTag<V> evidence$2)

fromJavaDStream

public static <K,V> JavaPairDStream<K,V> fromJavaDStream(JavaDStream<scala.Tuple2<K,V>> dstream)

scalaToJavaLong

public static <K> JavaPairDStream<K,Long> scalaToJavaLong(JavaPairDStream<K,Object> dstream,
                                                          scala.reflect.ClassTag<K> evidence$3)

dstream

public DStream<scala.Tuple2<K,V>> dstream()

kManifest

public scala.reflect.ClassTag<K> kManifest()

vManifest

public scala.reflect.ClassTag<V> vManifest()

wrapRDD

public JavaPairRDD<K,V> wrapRDD(RDD<scala.Tuple2<K,V>> rdd)

filter

public JavaPairDStream<K,V> filter(Function<scala.Tuple2<K,V>,Boolean> f)
Return a new DStream containing only the elements that satisfy a predicate.


cache

public JavaPairDStream<K,V> cache()
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)


persist

public JavaPairDStream<K,V> persist()
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)


persist

public JavaPairDStream<K,V> persist(StorageLevel storageLevel)
Persist the RDDs of this DStream with the given storage level


repartition

public JavaPairDStream<K,V> repartition(int numPartitions)
Return a new DStream with an increased or decreased level of parallelism. Each RDD in the returned DStream has exactly numPartitions partitions.

Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)

compute

public JavaPairRDD<K,V> compute(Time validTime)
Method that generates an RDD for the given time


window

public JavaPairDStream<K,V> window(Duration windowDuration)
Return a new DStream which is computed based on windowed batches of this DStream. The new DStream generates RDDs with the same interval as this DStream.

Parameters:
windowDuration - width of the window; must be a multiple of this DStream's interval.
Returns:
(undocumented)

window

public JavaPairDStream<K,V> window(Duration windowDuration,
                                   Duration slideDuration)
Return a new DStream which is computed based on windowed batches of this DStream.

Parameters:
windowDuration - duration (i.e., width) of the window; must be a multiple of this DStream's interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's interval
Returns:
(undocumented)
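How windowDuration and slideDuration line up against the batch interval can be sketched without Spark. The class and method below are hypothetical illustration-only names, and durations are modeled as plain integers (e.g., seconds) rather than Duration objects:

```java
import java.util.ArrayList;
import java.util.List;

public class WindowDemo {
    // For each window end time, list the start times of the batches it covers.
    // Both windowDuration and slideDuration are assumed to be multiples of
    // batchInterval, as the API requires.
    static List<List<Integer>> windows(int batchInterval, int windowDuration,
                                       int slideDuration, int totalTime) {
        List<List<Integer>> result = new ArrayList<>();
        for (int end = windowDuration; end <= totalTime; end += slideDuration) {
            List<Integer> batches = new ArrayList<>();
            for (int t = end - windowDuration; t < end; t += batchInterval) {
                batches.add(t);   // batch starting at time t falls in this window
            }
            result.add(batches);
        }
        return result;
    }
}
```

With a 2s batch interval, a 6s window, and a 4s slide, each emitted window covers the last three batches and a new window is produced every other batch.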

union

public JavaPairDStream<K,V> union(JavaPairDStream<K,V> that)
Return a new DStream by unifying data of another DStream with this DStream.

Parameters:
that - Another DStream having the same interval (i.e., slideDuration) as this DStream.
Returns:
(undocumented)

groupByKey

public JavaPairDStream<K,Iterable<V>> groupByKey()
Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Returns:
(undocumented)
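The per-batch grouping semantics can be illustrated in plain Java. This is a sketch with hypothetical names, not the Spark implementation, and it ignores partitioning:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupByKeyDemo {
    // Collect the values for each key into a list, mirroring what
    // groupByKey does within each batch RDD.
    static Map<String, List<Integer>> groupByKey(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> out = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            out.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return out;
    }
}
```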

groupByKey

public JavaPairDStream<K,Iterable<V>> groupByKey(int numPartitions)
Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)

groupByKey

public JavaPairDStream<K,Iterable<V>> groupByKey(Partitioner partitioner)
Return a new DStream by applying groupByKey on each RDD of this DStream. Therefore, the values for each key in this DStream's RDDs are grouped into a single sequence to generate the RDDs of the new DStream. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
partitioner - (undocumented)
Returns:
(undocumented)

reduceByKey

public JavaPairDStream<K,V> reduceByKey(Function2<V,V,V> func)
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
func - (undocumented)
Returns:
(undocumented)
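The per-batch merge performed by the associative reduce function can be sketched in plain Java (hypothetical names, no Spark dependency):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReduceByKeyDemo {
    // Merge all values for each key with an associative function
    // (integer addition here), as reduceByKey does within each batch RDD.
    static Map<String, Integer> reduceByKey(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> out = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            out.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return out;
    }
}
```

Associativity matters because Spark may combine values in any order across partitions.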

reduceByKey

public JavaPairDStream<K,V> reduceByKey(Function2<V,V,V> func,
                                        int numPartitions)
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
func - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

reduceByKey

public JavaPairDStream<K,V> reduceByKey(Function2<V,V,V> func,
                                        Partitioner partitioner)
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
func - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

combineByKey

public <C> JavaPairDStream<K,C> combineByKey(Function<V,C> createCombiner,
                                             Function2<C,V,C> mergeValue,
                                             Function2<C,C,C> mergeCombiners,
                                             Partitioner partitioner)
Combine elements of each key in DStream's RDDs using custom function. This is similar to the combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions for more information.

Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
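The roles of the three callbacks can be sketched in plain Java, modeling partitions as separate lists: createCombiner seeds a combiner from the first value of a key in a partition, mergeValue folds further values into it, and mergeCombiners joins combiners across partitions. All names here are hypothetical illustrations, not the Spark implementation:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.BinaryOperator;
import java.util.function.Function;

public class CombineByKeyDemo {
    static Map<String, int[]> combine(List<List<Map.Entry<String, Integer>>> partitions,
                                      Function<Integer, int[]> createCombiner,
                                      BiFunction<int[], Integer, int[]> mergeValue,
                                      BinaryOperator<int[]> mergeCombiners) {
        Map<String, int[]> merged = new HashMap<>();
        for (List<Map.Entry<String, Integer>> part : partitions) {
            Map<String, int[]> local = new HashMap<>();   // per-partition combiners
            for (Map.Entry<String, Integer> p : part) {
                int[] c = local.get(p.getKey());
                local.put(p.getKey(), c == null ? createCombiner.apply(p.getValue())
                                                : mergeValue.apply(c, p.getValue()));
            }
            // mergeCombiners joins per-partition results
            local.forEach((k, v) -> merged.merge(k, v, mergeCombiners));
        }
        return merged;
    }
}
```

The test below builds (sum, count) combiners per key, the classic setup for computing per-key averages.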

combineByKey

public <C> JavaPairDStream<K,C> combineByKey(Function<V,C> createCombiner,
                                             Function2<C,V,C> mergeValue,
                                             Function2<C,C,C> mergeCombiners,
                                             Partitioner partitioner,
                                             boolean mapSideCombine)
Combine elements of each key in DStream's RDDs using custom function. This is similar to the combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions for more information.

Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
partitioner - (undocumented)
mapSideCombine - (undocumented)
Returns:
(undocumented)

groupByKeyAndWindow

public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration)
Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
Returns:
(undocumented)

groupByKeyAndWindow

public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration,
                                                          Duration slideDuration)
Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
(undocumented)

groupByKeyAndWindow

public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration,
                                                          Duration slideDuration,
                                                          int numPartitions)
Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions - Number of partitions of each RDD in the new DStream.
Returns:
(undocumented)

groupByKeyAndWindow

public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration,
                                                          Duration slideDuration,
                                                          Partitioner partitioner)
Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.

Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
(undocumented)

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Duration windowDuration)
Return a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
reduceFunc - associative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
Returns:
(undocumented)
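In this non-incremental form, every slide re-reduces all batches currently inside the window. A plain-Java sketch of that semantics (hypothetical names, no Spark dependency):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReduceByKeyAndWindowDemo {
    // Re-reduce per key over every batch in the current window,
    // as the reduceByKeyAndWindow variant without an inverse function does.
    static Map<String, Integer> windowedReduce(
            List<List<Map.Entry<String, Integer>>> batchesInWindow) {
        Map<String, Integer> out = new HashMap<>();
        for (List<Map.Entry<String, Integer>> batch : batchesInWindow)
            for (Map.Entry<String, Integer> p : batch)
                out.merge(p.getKey(), p.getValue(), Integer::sum);
        return out;
    }
}
```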

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Duration windowDuration,
                                                 Duration slideDuration)
Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
reduceFunc - associative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
(undocumented)

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Duration windowDuration,
                                                 Duration slideDuration,
                                                 int numPartitions)
Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
reduceFunc - associative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions - Number of partitions of each RDD in the new DStream.
Returns:
(undocumented)

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Duration windowDuration,
                                                 Duration slideDuration,
                                                 Partitioner partitioner)
Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.

Parameters:
reduceFunc - associative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
(undocumented)

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Function2<V,V,V> invReduceFunc,
                                                 Duration windowDuration,
                                                 Duration slideDuration)
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated using the old window's reduced value: (1) reduce the new values that entered the window (e.g., adding new counts); (2) "inverse reduce" the old values that left the window (e.g., subtracting old counts). This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
reduceFunc - associative reduce function
invReduceFunc - inverse function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
(undocumented)
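The incremental update (new window = old window + entered batch - departed batch) can be sketched for per-key counts in plain Java. Names are hypothetical; the removeIf step stands in for the optional filterFunc of the other overloads:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IncrementalWindowDemo {
    // Slide the window once: apply reduceFunc to values that entered it
    // and invReduceFunc (here, subtraction) to values that left it.
    static Map<String, Integer> slide(Map<String, Integer> oldWindow,
                                      List<Map.Entry<String, Integer>> entered,
                                      List<Map.Entry<String, Integer>> left) {
        Map<String, Integer> out = new HashMap<>(oldWindow);
        for (Map.Entry<String, Integer> p : entered)
            out.merge(p.getKey(), p.getValue(), Integer::sum);    // reduceFunc
        for (Map.Entry<String, Integer> p : left)
            out.merge(p.getKey(), -p.getValue(), Integer::sum);   // invReduceFunc
        out.values().removeIf(v -> v == 0);   // plays the role of filterFunc
        return out;
    }
}
```

Subtraction inverts addition exactly, which is what makes counting an "invertible reduce function"; max or min, by contrast, cannot be used here because a departed maximum cannot be un-applied.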

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Function2<V,V,V> invReduceFunc,
                                                 Duration windowDuration,
                                                 Duration slideDuration,
                                                 int numPartitions,
                                                 Function<scala.Tuple2<K,V>,Boolean> filterFunc)
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated using the old window's reduced value: (1) reduce the new values that entered the window (e.g., adding new counts); (2) "inverse reduce" the old values that left the window (e.g., subtracting old counts). This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
reduceFunc - associative reduce function
invReduceFunc - inverse function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions - number of partitions of each RDD in the new DStream.
filterFunc - function to filter expired key-value pairs; only pairs that satisfy the function are retained. Set this to null if you do not want to filter.
Returns:
(undocumented)

reduceByKeyAndWindow

public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc,
                                                 Function2<V,V,V> invReduceFunc,
                                                 Duration windowDuration,
                                                 Duration slideDuration,
                                                 Partitioner partitioner,
                                                 Function<scala.Tuple2<K,V>,Boolean> filterFunc)
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated using the old window's reduced value: (1) reduce the new values that entered the window (e.g., adding new counts); (2) "inverse reduce" the old values that left the window (e.g., subtracting old counts). This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

Parameters:
reduceFunc - associative reduce function
invReduceFunc - inverse function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
filterFunc - function to filter expired key-value pairs; only pairs that satisfy the function are retained. Set this to null if you do not want to filter.
Returns:
(undocumented)

updateStateByKey

public <S> JavaPairDStream<K,S> updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
updateFunc - State update function. If this function returns None, then corresponding state key-value pair will be eliminated.
Returns:
(undocumented)
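The state-update contract can be sketched in plain Java. Note that this API uses Guava's com.google.common.base.Optional, while the sketch below substitutes java.util.Optional; all names are hypothetical illustrations:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.function.BiFunction;

public class UpdateStateDemo {
    // For each key seen in the prior state or the new batch, apply the
    // update function to (new values, previous state). An empty Optional
    // result drops the key, mirroring updateStateByKey's elimination rule.
    static Map<String, Integer> updateState(
            Map<String, Integer> state,
            Map<String, List<Integer>> newValues,
            BiFunction<List<Integer>, Optional<Integer>, Optional<Integer>> updateFunc) {
        Map<String, Integer> next = new HashMap<>();
        Set<String> keys = new HashSet<>(state.keySet());
        keys.addAll(newValues.keySet());
        for (String k : keys) {
            List<Integer> vals = newValues.getOrDefault(k, List.of());
            Optional<Integer> updated =
                updateFunc.apply(vals, Optional.ofNullable(state.get(k)));
            updated.ifPresent(v -> next.put(k, v));   // absent => key eliminated
        }
        return next;
    }
}
```

The test drives it with a running-count update function, the canonical use of this operation.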

updateStateByKey

public <S> JavaPairDStream<K,S> updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc,
                                                 int numPartitions)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
updateFunc - State update function. If this function returns None, then corresponding state key-value pair will be eliminated.
numPartitions - Number of partitions of each RDD in the new DStream.
Returns:
(undocumented)

updateStateByKey

public <S> JavaPairDStream<K,S> updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc,
                                                 Partitioner partitioner)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
updateFunc - State update function. If this function returns None, then corresponding state key-value pair will be eliminated.
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
(undocumented)

updateStateByKey

public <S> JavaPairDStream<K,S> updateStateByKey(Function2<java.util.List<V>,com.google.common.base.Optional<S>,com.google.common.base.Optional<S>> updateFunc,
                                                 Partitioner partitioner,
                                                 JavaPairRDD<K,S> initialRDD)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
updateFunc - State update function. If this function returns None, then corresponding state key-value pair will be eliminated.
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
initialRDD - initial state value of each key.
Returns:
(undocumented)

mapValues

public <U> JavaPairDStream<K,U> mapValues(Function<V,U> f)
Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.

Parameters:
f - (undocumented)
Returns:
(undocumented)

flatMapValues

public <U> JavaPairDStream<K,U> flatMapValues(Function<V,Iterable<U>> f)
Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.

Parameters:
f - (undocumented)
Returns:
(undocumented)
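The key-preserving behavior of mapValues and flatMapValues can be sketched in plain Java over ordinary maps (an illustration of the semantics, not Spark code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of mapValues / flatMapValues semantics: the key is
// untouched, only the value is transformed. flatMapValues may emit several
// pairs per input pair, one per element of the returned Iterable.
public class ValueTransforms {
    // mapValues analogue: replace each value with its length, keep the key.
    public static Map<String, Integer> mapValues(Map<String, String> pairs) {
        Map<String, Integer> out = new HashMap<>();
        for (Map.Entry<String, String> e : pairs.entrySet()) {
            out.put(e.getKey(), e.getValue().length());
        }
        return out;
    }

    // flatMapValues analogue: one (key, word) pair per word in the value.
    public static List<Map.Entry<String, String>> flatMapValues(Map<String, String> pairs) {
        List<Map.Entry<String, String>> out = new ArrayList<>();
        for (Map.Entry<String, String> e : pairs.entrySet()) {
            for (String word : e.getValue().split(" ")) {
                out.add(Map.entry(e.getKey(), word));
            }
        }
        return out;
    }
}
```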

cogroup

public <W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other)
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
other - (undocumented)
Returns:
(undocumented)

cogroup

public <W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other,
                                                                            int numPartitions)
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

cogroup

public <W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other,
                                                                            Partitioner partitioner)
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
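The per-batch result of cogroup pairs, for every key seen on either side, the collection of values from 'this' with the collection from other. A plain-Java sketch of that semantics (not Spark code; an absent key contributes an empty list):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

// Plain-Java sketch of cogroup semantics: for every key present in either
// input, emit (key, [values from left, values from right]).
public class Cogroup {
    public static Map<String, List<List<Integer>>> cogroup(
            Map<String, List<Integer>> left, Map<String, List<Integer>> right) {
        Map<String, List<List<Integer>>> out = new HashMap<>();
        TreeSet<String> keys = new TreeSet<>(left.keySet());
        keys.addAll(right.keySet());
        for (String k : keys) {
            out.put(k, List.of(
                    left.getOrDefault(k, List.of()),   // empty if key absent on left
                    right.getOrDefault(k, List.of()))); // empty if key absent on right
        }
        return out;
    }
}
```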

join

public <W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other)
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
other - (undocumented)
Returns:
(undocumented)

join

public <W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other,
                                                     int numPartitions)
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

join

public <W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other,
                                                     Partitioner partitioner)
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
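Unlike cogroup, an inner join only emits keys present on both sides. A plain-Java sketch of that semantics for single-valued sides (not Spark code; in the real operation each left value is paired with each matching right value):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of inner join semantics: a key appears in the result
// only when it is present on both sides.
public class InnerJoin {
    public static List<String> join(Map<String, Integer> left, Map<String, Integer> right) {
        List<String> out = new ArrayList<>();
        for (String k : left.keySet()) {
            if (right.containsKey(k)) {
                // pair the left value with the matching right value
                out.add(k + ":(" + left.get(k) + "," + right.get(k) + ")");
            }
        }
        return out;
    }
}
```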

leftOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<V,com.google.common.base.Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other)
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
other - (undocumented)
Returns:
(undocumented)

leftOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<V,com.google.common.base.Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other,
                                                                                               int numPartitions)
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

leftOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<V,com.google.common.base.Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other,
                                                                                               Partitioner partitioner)
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

rightOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,W>> rightOuterJoin(JavaPairDStream<K,W> other)
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
other - (undocumented)
Returns:
(undocumented)

rightOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,W>> rightOuterJoin(JavaPairDStream<K,W> other,
                                                                                                int numPartitions)
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

rightOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,W>> rightOuterJoin(JavaPairDStream<K,W> other,
                                                                                                Partitioner partitioner)
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)

fullOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>> fullOuterJoin(JavaPairDStream<K,W> other)
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
other - (undocumented)
Returns:
(undocumented)

fullOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>> fullOuterJoin(JavaPairDStream<K,W> other,
                                                                                                                                int numPartitions)
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)

fullOuterJoin

public <W> JavaPairDStream<K,scala.Tuple2<com.google.common.base.Optional<V>,com.google.common.base.Optional<W>>> fullOuterJoin(JavaPairDStream<K,W> other,
                                                                                                                                Partitioner partitioner)
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
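The three outer-join variants above differ only in which side's keys survive and which side's value is wrapped in an Optional. A plain-Java sketch of the left and full variants (Spark wraps the possibly missing side in com.google.common.base.Optional; java.util.Optional stands in here so the example is self-contained):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Plain-Java sketch of outer-join semantics over single-valued sides.
public class OuterJoins {
    // left outer: every left key survives; the right value may be absent.
    public static Map<String, Optional<Integer>> leftOuter(
            Map<String, Integer> left, Map<String, Integer> right) {
        Map<String, Optional<Integer>> out = new HashMap<>();
        for (String k : left.keySet()) {
            out.put(k, Optional.ofNullable(right.get(k)));
        }
        return out;
    }

    // full outer: every key from either side survives; either value may be absent.
    public static Map<String, List<Optional<Integer>>> fullOuter(
            Map<String, Integer> left, Map<String, Integer> right) {
        Map<String, List<Optional<Integer>>> out = new HashMap<>();
        Set<String> keys = new HashSet<>(left.keySet());
        keys.addAll(right.keySet());
        for (String k : keys) {
            out.put(k, List.of(
                    Optional.ofNullable(left.get(k)),
                    Optional.ofNullable(right.get(k))));
        }
        return out;
    }
}
```

rightOuterJoin is the mirror image of leftOuter: every right key survives and the left value is the Optional side.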

saveAsHadoopFiles

public void saveAsHadoopFiles(String prefix,
                              String suffix)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
prefix - (undocumented)
suffix - (undocumented)
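The documented naming pattern "prefix-TIME_IN_MS.suffix" can be sketched as a one-line helper (the actual path construction lives inside Spark; this only illustrates how a batch's time in milliseconds lands in the file name):

```java
// Sketch: the per-batch file name pattern "prefix-TIME_IN_MS.suffix"
// documented for saveAsHadoopFiles / saveAsNewAPIHadoopFiles.
public class BatchFileName {
    public static String fileName(String prefix, String suffix, long timeMs) {
        return prefix + "-" + timeMs + "." + suffix;
    }
}
```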

saveAsHadoopFiles

public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix,
                                                                                     String suffix,
                                                                                     Class<?> keyClass,
                                                                                     Class<?> valueClass,
                                                                                     Class<F> outputFormatClass)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)

saveAsHadoopFiles

public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix,
                                                                                     String suffix,
                                                                                     Class<?> keyClass,
                                                                                     Class<?> valueClass,
                                                                                     Class<F> outputFormatClass,
                                                                                     org.apache.hadoop.mapred.JobConf conf)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)
conf - (undocumented)

saveAsNewAPIHadoopFiles

public void saveAsNewAPIHadoopFiles(String prefix,
                                    String suffix)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
prefix - (undocumented)
suffix - (undocumented)

saveAsNewAPIHadoopFiles

public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix,
                                                                                              String suffix,
                                                                                              Class<?> keyClass,
                                                                                              Class<?> valueClass,
                                                                                              Class<F> outputFormatClass)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)

saveAsNewAPIHadoopFiles

public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix,
                                                                                              String suffix,
                                                                                              Class<?> keyClass,
                                                                                              Class<?> valueClass,
                                                                                              Class<F> outputFormatClass,
                                                                                              org.apache.hadoop.conf.Configuration conf)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)
conf - (undocumented)

toJavaDStream

public JavaDStream<scala.Tuple2<K,V>> toJavaDStream()
Convert to a JavaDStream.


classTag

public scala.reflect.ClassTag<scala.Tuple2<K,V>> classTag()