spark.streaming.api.java

JavaPairDStream

class JavaPairDStream[K, V] extends JavaDStreamLike[(K, V), JavaPairDStream[K, V], JavaPairRDD[K, V]]

Linear Supertypes
JavaDStreamLike[(K, V), JavaPairDStream[K, V], JavaPairRDD[K, V]], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new JavaPairDStream(dstream: DStream[(K, V)])(implicit kManifiest: ClassManifest[K], vManifest: ClassManifest[V])

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def cache(): JavaPairDStream[K, V]

    Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)

  8. def checkpoint(interval: Duration): DStream[(K, V)]

    Enable periodic checkpointing of the RDDs of this DStream.

    interval

    Time interval after which the generated RDDs will be checkpointed

    Definition Classes
    JavaDStreamLike
  9. val classManifest: ClassManifest[(K, V)]

    Definition Classes
    JavaPairDStream → JavaDStreamLike
  10. def clone(): AnyRef

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  11. def cogroup[W](other: JavaPairDStream[K, W], partitioner: Partitioner): JavaPairDStream[K, (List[V], List[W])]

    Cogroup this DStream with other DStream. For each key k in corresponding RDDs of this or other DStreams, the generated RDD will contain a tuple with the list of values for that key in both RDDs. The given Partitioner is used to partition each generated RDD.

  12. def cogroup[W](other: JavaPairDStream[K, W]): JavaPairDStream[K, (List[V], List[W])]

    Cogroup this DStream with other DStream. For each key k in corresponding RDDs of this or other DStreams, the generated RDD will contain a tuple with the list of values for that key in both RDDs. HashPartitioner is used to partition each generated RDD into the default number of partitions.
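
    For illustration, a minimal cogroup sketch, assuming two hypothetical streams views and clicks of type JavaPairDStream<String, Integer> keyed by page name (Tuple2 is scala.Tuple2; List is java.util.List):

      // Per batch, each key maps to the lists of values seen in both streams.
      JavaPairDStream<String, Tuple2<List<Integer>, List<Integer>>> grouped =
          views.cogroup(clicks);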

  13. def combineByKey[C](createCombiner: Function[V, C], mergeValue: Function2[C, V, C], mergeCombiners: Function2[C, C, C], partitioner: Partitioner): JavaPairDStream[K, C]

    Combine elements of each key in this DStream's RDDs using custom functions. This is similar to combineByKey for RDDs. Please refer to combineByKey in spark.PairRDDFunctions for more information.
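
    As a hedged sketch, a per-key (sum, count) accumulator from which a per-batch average can be derived. The prices stream, the Tuple2 accumulator, and the partition count are illustrative; Function/Function2 are this API's Java function interfaces and HashPartitioner is Spark's hash partitioner:

      // prices: JavaPairDStream<String, Integer>
      JavaPairDStream<String, Tuple2<Integer, Integer>> sumCounts = prices.combineByKey(
          // createCombiner: the first value for a key becomes (value, 1)
          new Function<Integer, Tuple2<Integer, Integer>>() {
            public Tuple2<Integer, Integer> call(Integer v) {
              return new Tuple2<Integer, Integer>(v, 1);
            }
          },
          // mergeValue: fold another value into the (sum, count) accumulator
          new Function2<Tuple2<Integer, Integer>, Integer, Tuple2<Integer, Integer>>() {
            public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> acc, Integer v) {
              return new Tuple2<Integer, Integer>(acc._1() + v, acc._2() + 1);
            }
          },
          // mergeCombiners: combine accumulators built on different partitions
          new Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>>() {
            public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> a, Tuple2<Integer, Integer> b) {
              return new Tuple2<Integer, Integer>(a._1() + b._1(), a._2() + b._2());
            }
          },
          new HashPartitioner(4));  // partition count is arbitrary here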

  14. def compute(validTime: Time): JavaPairRDD[K, V]

    Method that generates an RDD for the given Time

  15. def context(): StreamingContext

    Return the StreamingContext associated with this DStream

    Definition Classes
    JavaDStreamLike
  16. def count(): JavaDStream[Long]

    Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.

    Definition Classes
    JavaDStreamLike
  17. def countByValue(numPartitions: Int): JavaPairDStream[(K, V), Long]

    Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    numPartitions

    number of partitions of each RDD in the new DStream.

    Definition Classes
    JavaDStreamLike
  18. def countByValue(): JavaPairDStream[(K, V), Long]

    Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    Definition Classes
    JavaDStreamLike
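
    A small sketch: on a pair DStream the "value" being counted is the whole (K, V) tuple, so the result is keyed by that tuple (the pairs stream is illustrative):

      // pairs: JavaPairDStream<String, String>
      // Counts occurrences of each distinct (String, String) tuple per batch.
      JavaPairDStream<Tuple2<String, String>, Long> counts = pairs.countByValue();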
  19. def countByValueAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): JavaPairDStream[(K, V), Long]

    Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream.

    Definition Classes
    JavaDStreamLike
  20. def countByValueAndWindow(windowDuration: Duration, slideDuration: Duration): JavaPairDStream[(K, V), Long]

    Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    Definition Classes
    JavaDStreamLike
  21. def countByWindow(windowDuration: Duration, slideDuration: Duration): JavaDStream[Long]

    Return a new DStream in which each RDD has a single element generated by counting the number of elements in a window over this DStream. windowDuration and slideDuration are as defined in the window() operation. This is equivalent to window(windowDuration, slideDuration).count()

    Definition Classes
    JavaDStreamLike
  22. val dstream: DStream[(K, V)]

    Definition Classes
    JavaPairDStream → JavaDStreamLike
  23. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  24. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  25. def filter(f: Function[(K, V), Boolean]): JavaPairDStream[K, V]

    Return a new DStream containing only the elements that satisfy a predicate.

  26. def finalize(): Unit

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  27. def flatMap[K2, V2](f: PairFlatMapFunction[(K, V), K2, V2]): JavaPairDStream[K2, V2]

    Return a new DStream by applying a function to all elements of this DStream, and then flattening the results

    Definition Classes
    JavaDStreamLike
  28. def flatMap[U](f: FlatMapFunction[(K, V), U]): JavaDStream[U]

    Return a new DStream by applying a function to all elements of this DStream, and then flattening the results

    Definition Classes
    JavaDStreamLike
  29. def flatMapValues[U](f: Function[V, Iterable[U]]): JavaPairDStream[K, U]

    Pass each value in the key-value pair DStream through a flatMap function without changing the keys.

  30. def foreach(foreachFunc: Function2[JavaPairRDD[K, V], Time, Void]): Unit

    Apply a function to each RDD in this DStream. This is an output operator, so this DStream will be registered as an output stream and therefore materialized.

    Definition Classes
    JavaDStreamLike
  31. def foreach(foreachFunc: Function[JavaPairRDD[K, V], Void]): Unit

    Apply a function to each RDD in this DStream. This is an output operator, so this DStream will be registered as an output stream and therefore materialized.

    Definition Classes
    JavaDStreamLike
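
    A minimal output-operator sketch, assuming a JavaPairDStream<String, Integer> named pairs; the driver-side println stands in for a real sink:

      pairs.foreach(new Function<JavaPairRDD<String, Integer>, Void>() {
        public Void call(JavaPairRDD<String, Integer> rdd) {
          // Invoked once per batch on the driver; RDD actions run on the cluster.
          System.out.println("elements in batch: " + rdd.count());
          return null;
        }
      });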
  32. final def getClass(): java.lang.Class[_]

    Definition Classes
    AnyRef → Any
  33. def glom(): JavaDStream[List[(K, V)]]

    Return a new DStream in which each RDD is generated by applying glom() to each RDD of this DStream. Applying glom() to an RDD coalesces all elements within each partition into an array.

    Definition Classes
    JavaDStreamLike
  34. def groupByKey(partitioner: Partitioner): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey on each RDD of this DStream. Therefore, the values for each key in this DStream's RDDs are grouped into a single sequence to generate the RDDs of the new DStream. spark.Partitioner is used to control the partitioning of each RDD.

  35. def groupByKey(numPartitions: Int): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  36. def groupByKey(): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

  37. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  38. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    Number of partitions of each RDD in the new DStream.

  39. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
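
    A hedged sketch, assuming a JavaPairDStream<String, Integer> named pairs and a batch interval that divides both durations (the 30s/10s values are illustrative):

      // Group each key's values over the last 30 seconds, sliding every 10 seconds.
      JavaPairDStream<String, List<Integer>> windowedGroups =
          pairs.groupByKeyAndWindow(new Duration(30000), new Duration(10000));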

  40. def groupByKeyAndWindow(windowDuration: Duration): JavaPairDStream[K, List[V]]

    Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

  41. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  42. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  43. def join[W](other: JavaPairDStream[K, W], partitioner: Partitioner): JavaPairDStream[K, (V, W)]

    Join this DStream with other DStream, that is, each RDD of the new DStream will be generated by joining RDDs from this and other DStream. Uses the given Partitioner to partition each generated RDD.

  44. def join[W](other: JavaPairDStream[K, W]): JavaPairDStream[K, (V, W)]

    Join this DStream with other DStream. HashPartitioner is used to partition each generated RDD into the default number of partitions.
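
    A minimal sketch with two hypothetical streams keyed by user id:

      // names: JavaPairDStream<String, String>, scores: JavaPairDStream<String, Integer>
      // Per batch, emits (key, (name, score)) for every key present in both RDDs.
      JavaPairDStream<String, Tuple2<String, Integer>> joined = names.join(scores);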

  45. implicit val kManifiest: ClassManifest[K]

  46. def map[K2, V2](f: PairFunction[(K, V), K2, V2]): JavaPairDStream[K2, V2]

    Return a new DStream by applying a function to all elements of this DStream.

    Definition Classes
    JavaDStreamLike
  47. def map[R](f: Function[(K, V), R]): JavaDStream[R]

    Return a new DStream by applying a function to all elements of this DStream.

    Definition Classes
    JavaDStreamLike
  48. def mapPartitions[K2, V2](f: PairFlatMapFunction[Iterator[(K, V)], K2, V2]): JavaPairDStream[K2, V2]

    Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream. Applying mapPartitions() to an RDD applies a function to each partition of the RDD.

    Definition Classes
    JavaDStreamLike
  49. def mapPartitions[U](f: FlatMapFunction[Iterator[(K, V)], U]): JavaDStream[U]

    Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD of this DStream. Applying mapPartitions() to an RDD applies a function to each partition of the RDD.

    Definition Classes
    JavaDStreamLike
  50. def mapValues[U](f: Function[V, U]): JavaPairDStream[K, U]

    Pass each value in the key-value pair DStream through a map function without changing the keys.

  51. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  52. final def notify(): Unit

    Definition Classes
    AnyRef
  53. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  54. def persist(storageLevel: StorageLevel): JavaPairDStream[K, V]

    Persist the RDDs of this DStream with the given storage level

  55. def persist(): JavaPairDStream[K, V]

    Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)

  56. def print(): Unit

    Print the first ten elements of each RDD generated in this DStream. This is an output operator, so this DStream will be registered as an output stream and therefore materialized.

    Definition Classes
    JavaDStreamLike
  57. def reduce(f: Function2[(K, V), (K, V), (K, V)]): JavaDStream[(K, V)]

    Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream.

    Definition Classes
    JavaDStreamLike
  58. def reduceByKey(func: Function2[V, V, V], partitioner: Partitioner): JavaPairDStream[K, V]

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. spark.Partitioner is used to control the partitioning of each RDD.

  59. def reduceByKey(func: Function2[V, V, V], numPartitions: Int): JavaPairDStream[K, V]

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  60. def reduceByKey(func: Function2[V, V, V]): JavaPairDStream[K, V]

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
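
    The canonical per-batch word count, sketched in the pre-Java-8 anonymous-class style this Function2 interface expects (the wordOnes stream of (word, 1) pairs is assumed):

      JavaPairDStream<String, Integer> wordCounts = wordOnes.reduceByKey(
          new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) {
              return a + b;  // must be associative: merge order is unspecified
            }
          });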

  61. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner, filterFunc: Function[(K, V), Boolean]): JavaPairDStream[K, V]

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)
    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

    filterFunc

    function to filter expired key-value pairs; only pairs that satisfy the function are retained. Set this to null if you do not want to filter.

  62. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, numPartitions: Int, filterFunc: Function[(K, V), Boolean]): JavaPairDStream[K, V]

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)
    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with numPartitions partitions.

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream.

    filterFunc

    function to filter expired key-value pairs; only pairs that satisfy the function are retained. Set this to null if you do not want to filter.

  63. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration): JavaPairDStream[K, V]

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)
    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
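
    A hedged sketch of the incremental form. The signature above is rendered with Scala function types, so the Function2 arguments below assume the Java-friendly equivalent; wordOnes and the 60s/10s durations are illustrative, and stateful window operations like this one typically require checkpointing to be enabled:

      Function2<Integer, Integer, Integer> plus = new Function2<Integer, Integer, Integer>() {
        public Integer call(Integer a, Integer b) { return a + b; }
      };
      Function2<Integer, Integer, Integer> minus = new Function2<Integer, Integer, Integer>() {
        public Integer call(Integer a, Integer b) { return a - b; }  // inverse of plus
      };
      // Add counts entering the 60s window, subtract counts leaving it, every 10s.
      JavaPairDStream<String, Integer> windowedCounts = wordOnes.reduceByKeyAndWindow(
          plus, minus, new Duration(60000), new Duration(10000));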

  64. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): JavaPairDStream[K, V]

    Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  65. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, numPartitions: Int): JavaPairDStream[K, V]

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    Number of partitions of each RDD in the new DStream.

  66. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration): JavaPairDStream[K, V]

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  67. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration): JavaPairDStream[K, V]

    Create a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

  68. def reduceByWindow(reduceFunc: Function2[(K, V), (K, V), (K, V)], invReduceFunc: Function2[(K, V), (K, V), (K, V)], windowDuration: Duration, slideDuration: Duration): JavaDStream[(K, V)]

    Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream. However, the reduction is done incrementally using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)
    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    Definition Classes
    JavaDStreamLike
  69. def reduceByWindow(reduceFunc: ((K, V), (K, V)) ⇒ (K, V), windowDuration: Duration, slideDuration: Duration): DStream[(K, V)]

    Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    Definition Classes
    JavaDStreamLike
  70. def saveAsHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: org.apache.hadoop.mapred.OutputFormat[_, _]], conf: JobConf): Unit

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  71. def saveAsHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: org.apache.hadoop.mapred.OutputFormat[_, _]]): Unit

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  72. def saveAsHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String): Unit

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
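
    A hedged sketch using classic Hadoop writables, assuming a JavaPairDStream<Text, IntWritable> named writables (Text, IntWritable, and TextOutputFormat come from org.apache.hadoop.io and org.apache.hadoop.mapred):

      // Each batch writes output named "counts-<TIME_IN_MS>.txt".
      writables.saveAsHadoopFiles("counts", "txt", Text.class, IntWritable.class,
          TextOutputFormat.class);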

  73. def saveAsNewAPIHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: org.apache.hadoop.mapreduce.OutputFormat[_, _]], conf: Configuration = new Configuration): Unit

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  74. def saveAsNewAPIHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: org.apache.hadoop.mapreduce.OutputFormat[_, _]]): Unit

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  75. def saveAsNewAPIHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String): Unit

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  76. implicit def scalaIntToJavaLong(in: DStream[Long]): JavaDStream[Long]

    Definition Classes
    JavaDStreamLike
  77. def slice(fromTime: Time, toTime: Time): List[JavaPairRDD[K, V]]

    Return all the RDDs between 'fromTime' and 'toTime' (both inclusive)

    Definition Classes
    JavaDStreamLike
  78. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  79. def toString(): String

    Definition Classes
    AnyRef → Any
  80. def transform[K2, V2](transformFunc: Function2[JavaPairRDD[K, V], Time, JavaPairRDD[K2, V2]]): JavaPairDStream[K2, V2]

    Return a new DStream in which each RDD is generated by applying a function on each RDD of this DStream.

    Definition Classes
    JavaDStreamLike
  81. def transform[K2, V2](transformFunc: Function[JavaPairRDD[K, V], JavaPairRDD[K2, V2]]): JavaPairDStream[K2, V2]

    Return a new DStream in which each RDD is generated by applying a function on each RDD of this DStream.

    Definition Classes
    JavaDStreamLike
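
    A sketch of per-batch RDD-level work that has no direct DStream operation, assuming a JavaPairDStream<String, Integer> named pairs:

      // Sort each batch by key using the underlying pair-RDD API.
      JavaPairDStream<String, Integer> sorted = pairs.transform(
          new Function<JavaPairRDD<String, Integer>, JavaPairRDD<String, Integer>>() {
            public JavaPairRDD<String, Integer> call(JavaPairRDD<String, Integer> rdd) {
              return rdd.sortByKey();
            }
          });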
  82. def transform[U](transformFunc: Function2[JavaPairRDD[K, V], Time, JavaRDD[U]]): JavaDStream[U]

    Return a new DStream in which each RDD is generated by applying a function on each RDD of this DStream.

    Definition Classes
    JavaDStreamLike
  83. def transform[U](transformFunc: Function[JavaPairRDD[K, V], JavaRDD[U]]): JavaDStream[U]

    Return a new DStream in which each RDD is generated by applying a function on each RDD of this DStream.

    Definition Classes
    JavaDStreamLike
  84. def union(that: JavaPairDStream[K, V]): JavaPairDStream[K, V]

    Return a new DStream by unifying data of another DStream with this DStream.

    that

    Another DStream having the same interval (i.e., slideDuration) as this DStream.

  85. def updateStateByKey[S](updateFunc: Function2[List[V], Optional[S], Optional[S]], partitioner: Partitioner)(implicit arg0: ClassManifest[S]): JavaPairDStream[K, S]

    Create a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns an absent Optional, the corresponding state key-value pair will be eliminated.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  86. def updateStateByKey[S](updateFunc: Function2[List[V], Optional[S], Optional[S]], numPartitions: Int)(implicit arg0: ClassManifest[S]): JavaPairDStream[K, S]

    Create a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    S

    State type

    updateFunc

    State update function. If this function returns an absent Optional, the corresponding state key-value pair will be eliminated.

    numPartitions

    Number of partitions of each RDD in the new DStream.

  87. def updateStateByKey[S](updateFunc: Function2[List[V], Optional[S], Optional[S]]): JavaPairDStream[K, S]

    Create a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    S

    State type

    updateFunc

    State update function. If this function returns an absent Optional, the corresponding state key-value pair will be eliminated.
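
    A hedged running-count sketch, assuming a wordOnes stream of (word, 1) pairs and that Optional is Guava's com.google.common.base.Optional, as the signature suggests; stateful operations also require a checkpoint directory to be set:

      JavaPairDStream<String, Integer> runningCounts = wordOnes.updateStateByKey(
          new Function2<List<Integer>, Optional<Integer>, Optional<Integer>>() {
            public Optional<Integer> call(List<Integer> newValues, Optional<Integer> state) {
              int sum = state.or(0);          // previous count, or 0 if no state yet
              for (Integer v : newValues) {
                sum += v;                     // fold in this batch's values
              }
              return Optional.of(sum);        // returning absent would drop the key
            }
          });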

  88. implicit val vManifest: ClassManifest[V]

  89. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  90. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  91. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  92. def window(windowDuration: Duration, slideDuration: Duration): JavaPairDStream[K, V]

    Return a new DStream which is computed based on windowed batches of this DStream.

    windowDuration

    duration (i.e., width) of the window; must be a multiple of this DStream's interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's interval
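
    A minimal sketch, with illustrative durations that must each be a multiple of the batch interval:

      // Every 10 seconds, produce an RDD holding the last 30 seconds of pairs.
      JavaPairDStream<String, Integer> last30s =
          pairs.window(new Duration(30000), new Duration(10000));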

  93. def window(windowDuration: Duration): JavaPairDStream[K, V]

    Return a new DStream which is computed based on windowed batches of this DStream. The new DStream generates RDDs with the same interval as this DStream.

    windowDuration

    width of the window; must be a multiple of this DStream's interval

  94. def wrapRDD(rdd: RDD[(K, V)]): JavaPairRDD[K, V]

    Definition Classes
    JavaPairDStream → JavaDStreamLike

Inherited from JavaDStreamLike[(K, V), JavaPairDStream[K, V], JavaPairRDD[K, V]]

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any