Class PairDStreamFunctions<K,V>

Object
  org.apache.spark.streaming.dstream.PairDStreamFunctions<K,V>

All Implemented Interfaces:
- Serializable

Extra functions available on DStream of (key, value) pairs through an implicit conversion.
Constructor Summary

Constructors:
- PairDStreamFunctions
Method Summary

- cogroup(other[, numPartitions | partitioner]) - Return a new DStream by applying 'cogroup' between RDDs of 'this' DStream and 'other' DStream. (3 overloads)
- combineByKey(createCombiner, mergeValue, mergeCombiner, partitioner, mapSideCombine) - Combine elements of each key in DStream's RDDs using custom functions.
- flatMapValues(flatMapValuesFunc) - Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
- fullOuterJoin(other[, numPartitions | partitioner]) - Return a new DStream by applying 'full outer join' between RDDs of 'this' DStream and 'other' DStream. (3 overloads)
- groupByKey([numPartitions | partitioner]) - Return a new DStream by applying groupByKey to each RDD. (3 overloads)
- groupByKeyAndWindow(windowDuration[, slideDuration[, numPartitions | partitioner]]) - Return a new DStream by applying groupByKey over a sliding window. (4 overloads)
- join(other[, numPartitions | partitioner]) - Return a new DStream by applying 'join' between RDDs of 'this' DStream and 'other' DStream. (3 overloads)
- leftOuterJoin(other[, numPartitions | partitioner]) - Return a new DStream by applying 'left outer join' between RDDs of 'this' DStream and 'other' DStream. (3 overloads)
- mapValues(mapValuesFunc) - Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
- mapWithState(spec) - Return a MapWithStateDStream by applying a function to every key-value element of 'this' stream, while maintaining some state data for each unique key.
- reduceByKey(reduceFunc[, numPartitions | partitioner]) - Return a new DStream by applying reduceByKey to each RDD. (3 overloads)
- reduceByKeyAndWindow(reduceFunc[, invReduceFunc], windowDuration[, ...]) - Return a new DStream by applying reduceByKey over a sliding window; the invReduceFunc variants compute each window incrementally. (6 overloads)
- rightOuterJoin(other[, numPartitions | partitioner]) - Return a new DStream by applying 'right outer join' between RDDs of 'this' DStream and 'other' DStream. (3 overloads)
- saveAsHadoopFiles(prefix, suffix, ...) - Save each RDD in 'this' DStream as a Hadoop file (old mapred API). (2 overloads)
- saveAsNewAPIHadoopFiles(prefix, suffix, ...) - Save each RDD in 'this' DStream as a Hadoop file (new mapreduce API). (2 overloads)
- updateStateByKey(updateFunc, ...) - Return a new "state" DStream where the state for each key is updated by applying the given function to the previous state of the key and the new values of the key. (7 overloads)
Constructor Details

PairDStreamFunctions
Method Details

cogroup

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.collection.Iterable<V>,scala.collection.Iterable<W>>>> cogroup(DStream<scala.Tuple2<K,W>> other, scala.reflect.ClassTag<W> evidence$13)

Return a new DStream by applying 'cogroup' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- other - (undocumented)
- evidence$13 - (undocumented)
Returns:
- (undocumented)

cogroup

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.collection.Iterable<V>,scala.collection.Iterable<W>>>> cogroup(DStream<scala.Tuple2<K,W>> other, int numPartitions, scala.reflect.ClassTag<W> evidence$14)

Return a new DStream by applying 'cogroup' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- other - (undocumented)
- numPartitions - (undocumented)
- evidence$14 - (undocumented)
Returns:
- (undocumented)

cogroup

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.collection.Iterable<V>,scala.collection.Iterable<W>>>> cogroup(DStream<scala.Tuple2<K,W>> other, Partitioner partitioner, scala.reflect.ClassTag<W> evidence$15)

Return a new DStream by applying 'cogroup' between RDDs of 'this' DStream and 'other' DStream. The supplied org.apache.spark.Partitioner is used to partition the generated RDDs.

Parameters:
- other - (undocumented)
- partitioner - (undocumented)
- evidence$15 - (undocumented)
Returns:
- (undocumented)

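Example (illustrative): a minimal Scala sketch of cogroup. The StreamingContext ssc, the socket sources, ports, and value types are assumptions of this example, not part of the API above.

    import org.apache.spark.streaming.dstream.DStream

    // Two pair DStreams keyed by word, built from assumed socket sources.
    val left: DStream[(String, Int)] =
      ssc.socketTextStream("localhost", 9999).flatMap(_.split(" ")).map(w => (w, 1))
    val right: DStream[(String, String)] =
      ssc.socketTextStream("localhost", 9998).map(line => (line, line.toUpperCase))

    // For each batch: all Int values and all String values seen for each key.
    val cogrouped: DStream[(String, (Iterable[Int], Iterable[String]))] =
      left.cogroup(right)
    cogrouped.print()
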
combineByKey

public <C> DStream<scala.Tuple2<K,C>> combineByKey(scala.Function1<V,C> createCombiner, scala.Function2<C,V,C> mergeValue, scala.Function2<C,C,C> mergeCombiner, Partitioner partitioner, boolean mapSideCombine, scala.reflect.ClassTag<C> evidence$1)

Combine elements of each key in DStream's RDDs using custom functions. This is similar to the combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.

Parameters:
- createCombiner - (undocumented)
- mergeValue - (undocumented)
- mergeCombiner - (undocumented)
- partitioner - (undocumented)
- mapSideCombine - (undocumented)
- evidence$1 - (undocumented)
Returns:
- (undocumented)

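Example (illustrative): a sketch that combines per-key readings into a (sum, count) pair per batch, from which a mean could be derived. The readings stream, its socket source, and the HashPartitioner choice are assumptions of this example.

    import org.apache.spark.HashPartitioner
    import org.apache.spark.streaming.dstream.DStream

    // Assumed input: "sensor,value" lines from a socket, parsed into pairs.
    val readings: DStream[(String, Double)] =
      ssc.socketTextStream("localhost", 9999)
        .map(_.split(","))
        .map(a => (a(0), a(1).toDouble))

    // Per key and per batch: (sum, count), as with PairRDDFunctions.combineByKey.
    val sumCount: DStream[(String, (Double, Long))] = readings.combineByKey(
      (v: Double) => (v, 1L),                                               // createCombiner
      (acc: (Double, Long), v: Double) => (acc._1 + v, acc._2 + 1L),        // mergeValue
      (a: (Double, Long), b: (Double, Long)) => (a._1 + b._1, a._2 + b._2), // mergeCombiner
      new HashPartitioner(4),
      mapSideCombine = true)
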
flatMapValues

public <U> DStream<scala.Tuple2<K,U>> flatMapValues(scala.Function1<V,scala.collection.IterableOnce<U>> flatMapValuesFunc, scala.reflect.ClassTag<U> evidence$12)

Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.

Parameters:
- flatMapValuesFunc - (undocumented)
- evidence$12 - (undocumented)
Returns:
- (undocumented)

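Example (illustrative): flatMapValues expands each value into zero or more values while preserving the key. The userTags stream, its source, and its line format are assumptions of this example.

    import org.apache.spark.streaming.dstream.DStream

    // Assumed input: "user red,blue" lines, parsed into (user, csvTags) pairs.
    val userTags: DStream[(String, String)] =
      ssc.socketTextStream("localhost", 9999).map(_.split(" ", 2)).map(a => (a(0), a(1)))

    // ("u1", "red,blue") becomes ("u1", "red") and ("u1", "blue").
    val oneTagPerRecord: DStream[(String, String)] =
      userTags.flatMapValues(_.split(",").toSeq)
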
fullOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.Option<V>,scala.Option<W>>>> fullOuterJoin(DStream<scala.Tuple2<K,W>> other, scala.reflect.ClassTag<W> evidence$25)

Return a new DStream by applying 'full outer join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- other - (undocumented)
- evidence$25 - (undocumented)
Returns:
- (undocumented)

fullOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.Option<V>,scala.Option<W>>>> fullOuterJoin(DStream<scala.Tuple2<K,W>> other, int numPartitions, scala.reflect.ClassTag<W> evidence$26)

Return a new DStream by applying 'full outer join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- other - (undocumented)
- numPartitions - (undocumented)
- evidence$26 - (undocumented)
Returns:
- (undocumented)

fullOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.Option<V>,scala.Option<W>>>> fullOuterJoin(DStream<scala.Tuple2<K,W>> other, Partitioner partitioner, scala.reflect.ClassTag<W> evidence$27)

Return a new DStream by applying 'full outer join' between RDDs of 'this' DStream and 'other' DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- other - (undocumented)
- partitioner - (undocumented)
- evidence$27 - (undocumented)
Returns:
- (undocumented)

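Example (illustrative): in a full outer join each side becomes an Option, so keys present in only one stream within a batch still appear. The impressions and clicks streams and their sources are assumptions of this example.

    import org.apache.spark.streaming.dstream.DStream

    // Assumed inputs: per-batch (adId, count) pairs from two sources.
    val impressions: DStream[(String, Long)] =
      ssc.socketTextStream("localhost", 9999).map(id => (id, 1L)).reduceByKey(_ + _)
    val clicks: DStream[(String, Long)] =
      ssc.socketTextStream("localhost", 9998).map(id => (id, 1L)).reduceByKey(_ + _)

    // e.g. ("ad1", (Some(3), None)) for an ad with impressions but no clicks in the batch.
    val joined: DStream[(String, (Option[Long], Option[Long]))] =
      impressions.fullOuterJoin(clicks)
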
groupByKey

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKey()

Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Returns:
- (undocumented)

groupByKey

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKey(int numPartitions)

Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- numPartitions - (undocumented)
Returns:
- (undocumented)

groupByKey

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKey(Partitioner partitioner)

Return a new DStream by applying groupByKey on each RDD. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- partitioner - (undocumented)
Returns:
- (undocumented)

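Example (illustrative): groupByKey collects all values per key within each batch. The pairs stream and its socket source are assumptions of this example.

    import org.apache.spark.HashPartitioner
    import org.apache.spark.streaming.dstream.DStream

    val pairs: DStream[(String, Int)] =
      ssc.socketTextStream("localhost", 9999).flatMap(_.split(" ")).map(w => (w, 1))

    val grouped: DStream[(String, Iterable[Int])] = pairs.groupByKey()
    // Equivalent calls with explicit partitioning:
    //   pairs.groupByKey(8)
    //   pairs.groupByKey(new HashPartitioner(8))
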
groupByKeyAndWindow

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKeyAndWindow(Duration windowDuration)

Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
Returns:
- (undocumented)

groupByKeyAndWindow

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration)

Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
- (undocumented)

groupByKeyAndWindow

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, int numPartitions)

Return a new DStream by applying groupByKey over a sliding window on 'this' DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- numPartitions - number of partitions of each RDD in the new DStream; if not specified then Spark's default number of partitions will be used
Returns:
- (undocumented)

groupByKeyAndWindow

public DStream<scala.Tuple2<K,scala.collection.Iterable<V>>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, Partitioner partitioner)

Create a new DStream by applying groupByKey over a sliding window on 'this' DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.

Parameters:
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- partitioner - partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
- (undocumented)

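Example (illustrative): per-key grouping over a 30-second window that slides every 10 seconds; both durations must be multiples of the batch interval. The pairs stream, its source, and the chosen durations are assumptions of this example.

    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.dstream.DStream

    val pairs: DStream[(String, Int)] =
      ssc.socketTextStream("localhost", 9999).flatMap(_.split(" ")).map(w => (w, 1))

    // All values per key seen in the last 30s, recomputed every 10s.
    val windowed: DStream[(String, Iterable[Int])] =
      pairs.groupByKeyAndWindow(Seconds(30), Seconds(10))
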
join

public <W> DStream<scala.Tuple2<K,scala.Tuple2<V,W>>> join(DStream<scala.Tuple2<K,W>> other, scala.reflect.ClassTag<W> evidence$16)

Return a new DStream by applying 'join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- other - (undocumented)
- evidence$16 - (undocumented)
Returns:
- (undocumented)

join

public <W> DStream<scala.Tuple2<K,scala.Tuple2<V,W>>> join(DStream<scala.Tuple2<K,W>> other, int numPartitions, scala.reflect.ClassTag<W> evidence$17)

Return a new DStream by applying 'join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- other - (undocumented)
- numPartitions - (undocumented)
- evidence$17 - (undocumented)
Returns:
- (undocumented)

join

public <W> DStream<scala.Tuple2<K,scala.Tuple2<V,W>>> join(DStream<scala.Tuple2<K,W>> other, Partitioner partitioner, scala.reflect.ClassTag<W> evidence$18)

Return a new DStream by applying 'join' between RDDs of 'this' DStream and 'other' DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- other - (undocumented)
- partitioner - (undocumented)
- evidence$18 - (undocumented)
Returns:
- (undocumented)

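Example (illustrative): an inner join; only keys present in both streams within the batch appear in the result. The two streams, their sources, and their line formats are assumptions of this example.

    import org.apache.spark.streaming.dstream.DStream

    // Assumed inputs: "name,age" and "name,city" CSV lines from two sockets.
    val ages: DStream[(String, Int)] =
      ssc.socketTextStream("localhost", 9999).map(_.split(",")).map(a => (a(0), a(1).toInt))
    val cities: DStream[(String, String)] =
      ssc.socketTextStream("localhost", 9998).map(_.split(",")).map(a => (a(0), a(1)))

    // e.g. ("alice", (34, "Berlin")) when "alice" appears in both batches.
    val joined: DStream[(String, (Int, String))] = ages.join(cities)
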
leftOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<V,scala.Option<W>>>> leftOuterJoin(DStream<scala.Tuple2<K,W>> other, scala.reflect.ClassTag<W> evidence$19)

Return a new DStream by applying 'left outer join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- other - (undocumented)
- evidence$19 - (undocumented)
Returns:
- (undocumented)

leftOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<V,scala.Option<W>>>> leftOuterJoin(DStream<scala.Tuple2<K,W>> other, int numPartitions, scala.reflect.ClassTag<W> evidence$20)

Return a new DStream by applying 'left outer join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- other - (undocumented)
- numPartitions - (undocumented)
- evidence$20 - (undocumented)
Returns:
- (undocumented)

leftOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<V,scala.Option<W>>>> leftOuterJoin(DStream<scala.Tuple2<K,W>> other, Partitioner partitioner, scala.reflect.ClassTag<W> evidence$21)

Return a new DStream by applying 'left outer join' between RDDs of 'this' DStream and 'other' DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- other - (undocumented)
- partitioner - (undocumented)
- evidence$21 - (undocumented)
Returns:
- (undocumented)

mapValues

public <U> DStream<scala.Tuple2<K,U>> mapValues(scala.Function1<V,U> mapValuesFunc, scala.reflect.ClassTag<U> evidence$11)

Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.

Parameters:
- mapValuesFunc - (undocumented)
- evidence$11 - (undocumented)
Returns:
- (undocumented)

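Example (illustrative): mapValues rewrites each value while the key passes through unchanged. The userTags stream, its source, and its line format are assumptions of this example.

    import org.apache.spark.streaming.dstream.DStream

    val userTags: DStream[(String, String)] =
      ssc.socketTextStream("localhost", 9999).map(_.split(" ", 2)).map(a => (a(0), a(1)))

    // Keys unchanged; only values are normalized.
    val normalized: DStream[(String, String)] = userTags.mapValues(_.trim.toLowerCase)
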
mapWithState

public <StateType,MappedType> MapWithStateDStream<K,V,StateType,MappedType> mapWithState(StateSpec<K,V,StateType,MappedType> spec, scala.reflect.ClassTag<StateType> evidence$2, scala.reflect.ClassTag<MappedType> evidence$3)

Return a MapWithStateDStream by applying a function to every key-value element of 'this' stream, while maintaining some state data for each unique key. The mapping function and other specification (e.g. partitioners, timeouts, initial state data, etc.) of this transformation can be specified using the StateSpec class. The state data is accessible as a parameter of type State in the mapping function.

Example of using mapWithState:

    // A mapping function that maintains an integer state and returns a String
    def mappingFunction(key: String, value: Option[Int], state: State[Int]): Option[String] = {
      // Use state.exists(), state.get(), state.update() and state.remove()
      // to manage state, and return the necessary string
    }

    val spec = StateSpec.function(mappingFunction).numPartitions(10)

    val mapWithStateDStream = keyValueDStream.mapWithState[StateType, MappedType](spec)

Parameters:
- spec - Specification of this transformation
- evidence$2 - (undocumented)
- evidence$3 - (undocumented)
Returns:
- (undocumented)

reduceByKey

public DStream<scala.Tuple2<K,V>> reduceByKey(scala.Function2<V,V,V> reduceFunc)

Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative and commutative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- reduceFunc - (undocumented)
Returns:
- (undocumented)

reduceByKey

public DStream<scala.Tuple2<K,V>> reduceByKey(scala.Function2<V,V,V> reduceFunc, int numPartitions)

Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- reduceFunc - (undocumented)
- numPartitions - (undocumented)
Returns:
- (undocumented)

reduceByKey

public DStream<scala.Tuple2<K,V>> reduceByKey(scala.Function2<V,V,V> reduceFunc, Partitioner partitioner)

Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- reduceFunc - (undocumented)
- partitioner - (undocumented)
Returns:
- (undocumented)

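Example (illustrative): the classic streaming word count, producing one count per key per batch. The socket source and stream names are assumptions of this example.

    import org.apache.spark.streaming.dstream.DStream

    val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))
    val pairs: DStream[(String, Int)] = words.map(w => (w, 1))

    // The reduce function must be associative and commutative.
    val counts: DStream[(String, Int)] = pairs.reduceByKey(_ + _)
    counts.print()
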
reduceByKeyAndWindow

public DStream<scala.Tuple2<K,V>> reduceByKeyAndWindow(scala.Function2<V,V,V> reduceFunc, Duration windowDuration)

Return a new DStream by applying reduceByKey over a sliding window on 'this' DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- reduceFunc - associative and commutative reduce function
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
Returns:
- (undocumented)

reduceByKeyAndWindow

public DStream<scala.Tuple2<K,V>> reduceByKeyAndWindow(scala.Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration)

Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- reduceFunc - associative and commutative reduce function
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
- (undocumented)

reduceByKeyAndWindow

public DStream<scala.Tuple2<K,V>> reduceByKeyAndWindow(scala.Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions)

Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- reduceFunc - associative and commutative reduce function
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- numPartitions - number of partitions of each RDD in the new DStream.
Returns:
- (undocumented)

reduceByKeyAndWindow

public DStream<scala.Tuple2<K,V>> reduceByKeyAndWindow(scala.Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner)

Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.

Parameters:
- reduceFunc - associative and commutative reduce function
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- partitioner - partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
- (undocumented)

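Example (illustrative): word counts over the last 30 seconds, recomputed every 10 seconds; both durations must be multiples of the batch interval. The pairs stream is an assumption of this example (as in the reduceByKey sketch above).

    import org.apache.spark.streaming.Seconds

    val windowedCounts = pairs.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b,   // associative and commutative reduce function
      Seconds(30),                 // window width
      Seconds(10))                 // slide interval
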
reduceByKeyAndWindow

public DStream<scala.Tuple2<K,V>> reduceByKeyAndWindow(scala.Function2<V,V,V> reduceFunc, scala.Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions, scala.Function1<scala.Tuple2<K,V>,Object> filterFunc)

Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated using the old window's reduced value:
1. reduce the new values that entered the window (e.g., adding new counts)
2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- reduceFunc - associative and commutative reduce function
- invReduceFunc - inverse reduce function; such that for all y, invertible x: invReduceFunc(reduceFunc(x, y), x) = y
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- numPartitions - (undocumented)
- filterFunc - Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
Returns:
- (undocumented)

reduceByKeyAndWindow

public DStream<scala.Tuple2<K,V>> reduceByKeyAndWindow(scala.Function2<V,V,V> reduceFunc, scala.Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner, scala.Function1<scala.Tuple2<K,V>,Object> filterFunc)

Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated using the old window's reduced value:
1. reduce the new values that entered the window (e.g., adding new counts)
2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

Parameters:
- reduceFunc - associative and commutative reduce function
- invReduceFunc - inverse reduce function
- windowDuration - width of the window; must be a multiple of this DStream's batching interval
- slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- partitioner - partitioner for controlling the partitioning of each RDD in the new DStream.
- filterFunc - Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
Returns:
- (undocumented)

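Example (illustrative): the incremental variant. Each new window is computed from the previous one by adding the batch that entered and "inverse reducing" (here, subtracting) the batch that left, so the reduce function must be invertible. Checkpointing must be enabled for this variant; the checkpoint directory, the pairs stream, and reliance on the Scala API's default values for numPartitions and filterFunc are assumptions of this example.

    import org.apache.spark.streaming.Seconds

    ssc.checkpoint("/tmp/spark-checkpoint")   // required for the invertible variant

    val windowedCounts = pairs.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b,   // reduceFunc: add counts entering the window
      (a: Int, b: Int) => a - b,   // invReduceFunc: subtract counts leaving it
      Seconds(30), Seconds(10))
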
rightOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.Option<V>,W>>> rightOuterJoin(DStream<scala.Tuple2<K,W>> other, scala.reflect.ClassTag<W> evidence$22)

Return a new DStream by applying 'right outer join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- other - (undocumented)
- evidence$22 - (undocumented)
Returns:
- (undocumented)

rightOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.Option<V>,W>>> rightOuterJoin(DStream<scala.Tuple2<K,W>> other, int numPartitions, scala.reflect.ClassTag<W> evidence$23)

Return a new DStream by applying 'right outer join' between RDDs of 'this' DStream and 'other' DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- other - (undocumented)
- numPartitions - (undocumented)
- evidence$23 - (undocumented)
Returns:
- (undocumented)

rightOuterJoin

public <W> DStream<scala.Tuple2<K,scala.Tuple2<scala.Option<V>,W>>> rightOuterJoin(DStream<scala.Tuple2<K,W>> other, Partitioner partitioner, scala.reflect.ClassTag<W> evidence$24)

Return a new DStream by applying 'right outer join' between RDDs of 'this' DStream and 'other' DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- other - (undocumented)
- partitioner - (undocumented)
- evidence$24 - (undocumented)
Returns:
- (undocumented)

saveAsHadoopFiles

public <F extends org.apache.hadoop.mapred.OutputFormat<K,V>> void saveAsHadoopFiles(String prefix, String suffix, scala.reflect.ClassTag<F> fm)

Save each RDD in 'this' DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
- prefix - (undocumented)
- suffix - (undocumented)
- fm - (undocumented)

saveAsHadoopFiles

public void saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<? extends org.apache.hadoop.mapred.OutputFormat<?,?>> outputFormatClass, org.apache.hadoop.mapred.JobConf conf)

Save each RDD in 'this' DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
- prefix - (undocumented)
- suffix - (undocumented)
- keyClass - (undocumented)
- valueClass - (undocumented)
- outputFormatClass - (undocumented)
- conf - (undocumented)

saveAsNewAPIHadoopFiles

public <F extends org.apache.hadoop.mapreduce.OutputFormat<K,V>> void saveAsNewAPIHadoopFiles(String prefix, String suffix, scala.reflect.ClassTag<F> fm)

Save each RDD in 'this' DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
- prefix - (undocumented)
- suffix - (undocumented)
- fm - (undocumented)

saveAsNewAPIHadoopFiles

public void saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<? extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> outputFormatClass, org.apache.hadoop.conf.Configuration conf)

Save each RDD in 'this' DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

Parameters:
- prefix - (undocumented)
- suffix - (undocumented)
- keyClass - (undocumented)
- valueClass - (undocumented)
- outputFormatClass - (undocumented)
- conf - (undocumented)

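Example (illustrative): writing each batch of a (String, Int) stream with the new Hadoop API, so each batch lands under "prefix-TIME_IN_MS.suffix". The output path, the suffix, the counts stream (as in the reduceByKey sketch above), and the choice of TextOutputFormat are assumptions of this example.

    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

    counts.saveAsNewAPIHadoopFiles[TextOutputFormat[String, Int]](
      "hdfs:///streaming/wordcounts",  // prefix
      "txt")                           // suffix
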
updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function2<scala.collection.immutable.Seq<V>,scala.Option<S>,scala.Option<S>> updateFunc, scala.reflect.ClassTag<S> evidence$4)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

Parameters:
- updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- evidence$4 - (undocumented)
Returns:
- (undocumented)

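Example (illustrative): a running count per key across all batches, using the simplest overload above. Checkpointing must be enabled for stateful transformations; the checkpoint directory and the pairs stream (as in the reduceByKey sketch above) are assumptions of this example.

    import org.apache.spark.streaming.dstream.DStream

    ssc.checkpoint("/tmp/spark-checkpoint")   // required for updateStateByKey

    // Returning None instead would eliminate the key's state.
    def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] =
      Some(runningCount.getOrElse(0) + newValues.sum)

    val runningCounts: DStream[(String, Int)] = pairs.updateStateByKey[Int](updateFunction _)
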
updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function2<scala.collection.immutable.Seq<V>,scala.Option<S>,scala.Option<S>> updateFunc, int numPartitions, scala.reflect.ClassTag<S> evidence$5)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with numPartitions partitions.

Parameters:
- updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- numPartitions - Number of partitions of each RDD in the new DStream.
- evidence$5 - (undocumented)
Returns:
- (undocumented)

updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function2<scala.collection.immutable.Seq<V>,scala.Option<S>,scala.Option<S>> updateFunc, Partitioner partitioner, scala.reflect.ClassTag<S> evidence$6)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
- evidence$6 - (undocumented)
Returns:
- (undocumented)

updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function1<scala.collection.Iterator<scala.Tuple3<K,scala.collection.immutable.Seq<V>,scala.Option<S>>>,scala.collection.Iterator<scala.Tuple2<K,S>>> updateFunc, Partitioner partitioner, boolean rememberPartitioner, scala.reflect.ClassTag<S> evidence$7)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- updateFunc - State update function. Note that this function may generate a different tuple with a different key than the input key. Therefore keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.
- partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream
- rememberPartitioner - Whether to remember the partitioner object in the generated RDDs.
- evidence$7 - (undocumented)
Returns:
- (undocumented)

updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function2<scala.collection.immutable.Seq<V>,scala.Option<S>,scala.Option<S>> updateFunc, Partitioner partitioner, RDD<scala.Tuple2<K,S>> initialRDD, scala.reflect.ClassTag<S> evidence$8)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
- initialRDD - initial state value of each key.
- evidence$8 - (undocumented)
Returns:
- (undocumented)

updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function1<scala.collection.Iterator<scala.Tuple3<K,scala.collection.immutable.Seq<V>,scala.Option<S>>>,scala.collection.Iterator<scala.Tuple2<K,S>>> updateFunc, Partitioner partitioner, boolean rememberPartitioner, RDD<scala.Tuple2<K,S>> initialRDD, scala.reflect.ClassTag<S> evidence$9)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- updateFunc - State update function. Note that this function may generate a different tuple with a different key than the input key. Therefore keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.
- partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream
- rememberPartitioner - Whether to remember the partitioner object in the generated RDDs.
- initialRDD - initial state value of each key.
- evidence$9 - (undocumented)
Returns:
- (undocumented)

updateStateByKey

public <S> DStream<scala.Tuple2<K,S>> updateStateByKey(scala.Function4<Time,K,scala.collection.immutable.Seq<V>,scala.Option<S>,scala.Option<S>> updateFunc, Partitioner partitioner, boolean rememberPartitioner, scala.Option<RDD<scala.Tuple2<K,S>>> initialRDD, scala.reflect.ClassTag<S> evidence$10)

Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

Parameters:
- updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
- rememberPartitioner - (undocumented)
- initialRDD - (undocumented)
- evidence$10 - (undocumented)
Returns:
- (undocumented)