org.apache.spark.streaming.dstream

PairDStreamFunctions

class PairDStreamFunctions[K, V] extends Serializable

Extra functions available on DStream of (key, value) pairs through an implicit conversion. Import org.apache.spark.streaming.StreamingContext._ at the top of your program to use these functions.
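
For instance, a minimal sketch of how the implicit conversion makes these functions available on a DStream of pairs (the socket source, host, port, and one-second batch interval are illustrative, not required):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.StreamingContext._  // brings the implicit conversion into scope

  val conf = new SparkConf().setAppName("PairDStreamExample")
  val ssc = new StreamingContext(conf, Seconds(1))

  // pairs is a DStream[(String, Int)]; the sketches further down this page reuse it
  val pairs = ssc.socketTextStream("localhost", 9999)
    .flatMap(_.split(" "))
    .map(word => (word, 1))

  pairs.reduceByKey(_ + _).print()   // reduceByKey comes from PairDStreamFunctions

  ssc.start()
  ssc.awaitTermination()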

Linear Supertypes
Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new PairDStreamFunctions(self: DStream[(K, V)])(implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K])

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def cogroup[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to partition the generated RDDs.

  9. def cogroup[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  10. def cogroup[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
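
    A brief sketch, reusing the pairs value (a DStream[(String, Int)]) from the sketch at the top of this page; the second stream here is a stand-in derived from pairs itself:

      val other = pairs.mapValues(v => s"seen $v times")   // stand-in for a DStream[(String, String)]
      val grouped = pairs.cogroup(other)                    // DStream[(String, (Iterable[Int], Iterable[String]))]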

  11. def combineByKey[C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiner: (C, C) ⇒ C, partitioner: Partitioner, mapSideCombine: Boolean = true)(implicit arg0: ClassTag[C]): DStream[(K, C)]

    Combine elements of each key in DStream's RDDs using custom functions.

    Combine elements of each key in DStream's RDDs using custom functions. This is similar to the combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.
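
    For example, a sketch computing a per-batch (sum, count) for each key, reusing the pairs value from the top of this page (the HashPartitioner with 4 partitions is an arbitrary choice):

      import org.apache.spark.HashPartitioner

      val sumAndCount = pairs.combineByKey[(Int, Int)](
        v => (v, 1),                             // createCombiner: first value seen for a key
        (acc, v) => (acc._1 + v, acc._2 + 1),    // mergeValue: fold another value into the combiner
        (a, b) => (a._1 + b._1, a._2 + b._2),    // mergeCombiner: merge combiners across partitions
        new HashPartitioner(4))
      // sumAndCount is a DStream[(String, (Int, Int))]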

  12. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. def flatMapValues[U](flatMapValuesFunc: (V) ⇒ TraversableOnce[U])(implicit arg0: ClassTag[U]): DStream[(K, U)]

    Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
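
    For example, a small sketch reusing the pairs value from the top of this page:

      // each (word, n) becomes two pairs, (word, n) and (word, n * 10); keys are unchanged
      val expanded = pairs.flatMapValues(v => Seq(v, v * 10))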

  16. def fullOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  17. def fullOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  18. def fullOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
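
    A brief sketch, reusing the pairs value from the top of this page; the second stream is a stand-in derived from pairs, so in this toy case every key appears on both sides:

      val other = pairs.mapValues(_.toLong)          // stand-in for a DStream[(String, Long)]
      val full = pairs.fullOuterJoin(other)          // DStream[(String, (Option[Int], Option[Long]))]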

  19. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  20. def groupByKey(partitioner: Partitioner): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey on each RDD.

    Return a new DStream by applying groupByKey on each RDD. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  21. def groupByKey(numPartitions: Int): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  22. def groupByKey(): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
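
    For example, reusing the pairs value from the top of this page:

      // collects all values seen for each word within a batch
      val groupedPerBatch = pairs.groupByKey()       // DStream[(String, Iterable[Int])]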

  23. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window on this DStream.

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

  24. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window on this DStream.

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream; if not specified then Spark's default number of partitions will be used

  25. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window.

    Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  26. def groupByKeyAndWindow(windowDuration: Duration): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window.

    Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval
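
    For example, reusing the pairs value from the top of this page (the 30-second window is illustrative; it must be a multiple of the batch interval):

      import org.apache.spark.streaming.Seconds

      // groups values per key over the last 30 seconds, generating RDDs at the batch interval
      val windowedGroups = pairs.groupByKeyAndWindow(Seconds(30))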

  27. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  28. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  29. def join[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  30. def join[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  31. def join[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
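
    For example, reusing the pairs value from the top of this page, with a stand-in second stream:

      val meta = pairs.mapValues(v => s"count=$v")   // stand-in for a DStream[(String, String)]
      val joined = pairs.join(meta)                  // DStream[(String, (Int, String))]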

  32. def leftOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  33. def leftOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  34. def leftOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
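
    For example, reusing the pairs value from the top of this page:

      val meta = pairs.mapValues(v => s"count=$v")   // stand-in for a DStream[(String, String)]
      val enriched = pairs.leftOuterJoin(meta)       // DStream[(String, (Int, Option[String]))]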

  35. def mapValues[U](mapValuesFunc: (V) ⇒ U)(implicit arg0: ClassTag[U]): DStream[(K, U)]

    Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
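
    For example, reusing the pairs value from the top of this page:

      // doubles each count; the keys are untouched, so any partitioning by key is preserved
      val doubled = pairs.mapValues(_ * 2)           // DStream[(String, Int)]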

  36. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  37. final def notify(): Unit

    Definition Classes
    AnyRef
  38. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  39. def reduceByKey(reduceFunc: (V, V) ⇒ V, partitioner: Partitioner): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  40. def reduceByKey(reduceFunc: (V, V) ⇒ V, numPartitions: Int): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  41. def reduceByKey(reduceFunc: (V, V) ⇒ V): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
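
    For example, reusing the pairs value from the top of this page:

      // per-batch word counts: values for the same word within a batch are summed
      val wordCounts = pairs.reduceByKey(_ + _)      // DStream[(String, Int)]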

  42. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner, filterFunc: ((K, V)) ⇒ Boolean): DStream[(K, V)]

    Return a new DStream by applying incremental reduceByKey over a sliding window.

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)

    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

    filterFunc

    Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained

  43. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration = self.slideDuration, numPartitions: Int = ssc.sc.defaultParallelism, filterFunc: ((K, V)) ⇒ Boolean = null): DStream[(K, V)]

    Return a new DStream by applying incremental reduceByKey over a sliding window.

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)

    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    filterFunc

    Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
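
    For example, a sketch of incremental window counting, reusing the ssc and pairs values from the top of this page (the checkpoint path and durations are illustrative; checkpointing is required for this incremental variant):

      import org.apache.spark.streaming.Seconds

      ssc.checkpoint("/tmp/streaming-checkpoint")
      // add counts entering the 60-second window and subtract counts leaving it, sliding every 10 seconds
      val windowedCounts = pairs.reduceByKeyAndWindow(
        (a: Int, b: Int) => a + b,                   // reduceFunc
        (a: Int, b: Int) => a - b,                   // invReduceFunc
        Seconds(60), Seconds(10))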

  44. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

  45. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream.

  46. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  47. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window on this DStream.

    Return a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval
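
    For example, reusing the pairs value from the top of this page (the 30-second window is illustrative):

      import org.apache.spark.streaming.Seconds

      // word counts over the last 30 seconds, generating RDDs at the batch interval
      val recentCounts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30))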

  48. def rightOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  49. def rightOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  50. def rightOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

  51. def saveAsHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: JobConf = ...): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix"

  52. def saveAsHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassTag[F]): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix"
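
    A hedged sketch, reusing the pairs value from the top of this page; the output prefix and suffix are illustrative, and TextOutputFormat is just one possible choice of output format (it writes keys and values via toString):

      import org.apache.hadoop.mapred.TextOutputFormat

      // one directory of text files per batch, named like "counts-<TIME_IN_MS>.txt"
      pairs.reduceByKey(_ + _)
        .saveAsHadoopFiles[TextOutputFormat[String, Int]]("counts", "txt")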

  53. def saveAsNewAPIHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: Configuration = ...): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  54. def saveAsNewAPIHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassTag[F]): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  55. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  56. def toString(): String

    Definition Classes
    AnyRef → Any
  57. def updateStateByKey[S](updateFunc: (Iterator[(K, Seq[V], Option[S])]) ⇒ Iterator[(K, S)], partitioner: Partitioner, rememberPartitioner: Boolean)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. Note that this function may generate a tuple with a different key than the input key, so keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream

    rememberPartitioner

    Whether to remember the partitioner object in the generated RDDs.

  58. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  59. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], numPartitions: Int)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.

    numPartitions

    Number of partitions of each RDD in the new DStream.

  60. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S])(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
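
    For example, a running word count across batches, reusing the ssc and pairs values from the top of this page (the checkpoint path is illustrative; checkpointing is required for stateful operations):

      ssc.checkpoint("/tmp/streaming-checkpoint")
      val runningCounts = pairs.updateStateByKey[Int] { (newValues: Seq[Int], state: Option[Int]) =>
        Some(state.getOrElse(0) + newValues.sum)     // return None here to drop the key's state
      }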

  61. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  62. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  63. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
