trait MicroBatchStream extends SparkDataStream

A SparkDataStream for streaming queries that execute in micro-batch mode.

Annotations
@Evolving()
Source
MicroBatchStream.java
Since

3.0.0

Linear Supertypes
SparkDataStream, AnyRef, Any
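
To show where this trait plugs into a query, here is a hedged sketch: Spark obtains a MicroBatchStream from a Scan through Scan.toMicroBatchStream, a real Data Source V2 hook. CounterScan is a hypothetical name invented for this example, and CounterStream is the hypothetical stream sketched after the abstract member list below.

import org.apache.spark.sql.connector.read.Scan
import org.apache.spark.sql.connector.read.streaming.MicroBatchStream
import org.apache.spark.sql.types.StructType

// Hypothetical Scan exposing a micro-batch stream. Spark calls
// toMicroBatchStream with the query's checkpoint location when this
// source is read in a micro-batch streaming query.
class CounterScan(schema: StructType) extends Scan {
  override def readSchema(): StructType = schema
  override def toMicroBatchStream(checkpointLocation: String): MicroBatchStream =
    new CounterStream()  // see the implementation sketch below
}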

Abstract Value Members

  1. abstract def commit(end: Offset): Unit

    Informs the source that Spark has completed processing all data for offsets less than or equal to end and will only request offsets greater than end in the future.

    Definition Classes
    SparkDataStream
  2. abstract def createReaderFactory(): PartitionReaderFactory

    Returns a factory to create a PartitionReader for each InputPartition.

  3. abstract def deserializeOffset(json: String): Offset

    Deserialize a JSON string into an Offset of the implementation-defined offset type.

    Definition Classes
    SparkDataStream
    Exceptions thrown

    IllegalArgumentException if the JSON does not encode a valid offset for this reader

  4. abstract def initialOffset(): Offset

    Returns the initial offset for a streaming query to start reading from. Note that the streaming data source should not assume that it will start reading from its initial offset: if Spark is restarting an existing query, it will restart from the checkpointed offset rather than the initial one.

    Definition Classes
    SparkDataStream
  5. abstract def latestOffset(): Offset

    Returns the most recent offset available.

  6. abstract def planInputPartitions(start: Offset, end: Offset): Array[InputPartition]

    Returns a list of input partitions given the start and end offsets. Each InputPartition represents a data split that can be processed by one Spark task. The number of input partitions returned here is the same as the number of RDD partitions this scan outputs. (A minimal implementation sketch follows this member list.)

    If the Scan supports filter pushdown, this stream is likely configured with a filter and is responsible for creating splits for that filter, which is not a full scan.

    This method will be called multiple times, to launch one Spark job for each micro-batch in this data stream.

  7. abstract def stop(): Unit

    Stop this source and free any resources it has allocated.

    Definition Classes
    SparkDataStream
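
As an illustration of the contract above, the following is a minimal sketch of a complete implementation over a hypothetical in-memory counter source. CounterOffset, CounterPartition, CounterStream, and highWatermark are invented names, not part of the Spark API; only the imported types and the overridden signatures come from Spark.

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.connector.read.{InputPartition, PartitionReader, PartitionReaderFactory}
import org.apache.spark.sql.connector.read.streaming.{MicroBatchStream, Offset}

// Offsets are implementation-defined; this one wraps a single Long and
// round-trips through json() / deserializeOffset.
case class CounterOffset(value: Long) extends Offset {
  override def json(): String = value.toString
}

// One data split, processed by one Spark task.
case class CounterPartition(start: Long, end: Long) extends InputPartition

class CounterStream extends MicroBatchStream {
  @volatile private var highWatermark = 0L  // advanced by an external producer

  override def initialOffset(): Offset = CounterOffset(0L)

  override def latestOffset(): Offset = CounterOffset(highWatermark)

  override def deserializeOffset(json: String): Offset =
    try CounterOffset(json.toLong)
    catch {
      case _: NumberFormatException =>
        throw new IllegalArgumentException(s"Not a valid offset for this reader: $json")
    }

  // A real source would split the range (start, end] into several partitions;
  // this sketch plans a single partition per micro-batch.
  override def planInputPartitions(start: Offset, end: Offset): Array[InputPartition] = {
    val s = start.asInstanceOf[CounterOffset].value
    val e = end.asInstanceOf[CounterOffset].value
    Array(CounterPartition(s, e))
  }

  override def createReaderFactory(): PartitionReaderFactory =
    new PartitionReaderFactory {
      override def createReader(p: InputPartition): PartitionReader[InternalRow] = {
        val cp = p.asInstanceOf[CounterPartition]
        new PartitionReader[InternalRow] {
          private var current = cp.start
          override def next(): Boolean = { current += 1; current <= cp.end }
          override def get(): InternalRow = InternalRow(current)
          override def close(): Unit = ()
        }
      }
    }

  override def commit(end: Offset): Unit = {
    // Data for offsets less than or equal to `end` may now be discarded.
  }

  override def stop(): Unit = ()  // release any connections or threads here
}

Per micro-batch, Spark asks for latestOffset, plans partitions for the range between the previous end offset and the new one, reads them with readers obtained from createReaderFactory, and finally calls commit; offsets cross the checkpoint boundary as JSON via json() and deserializeOffset.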

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  9. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  10. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  11. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  12. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  13. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  14. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  15. def toString(): String
    Definition Classes
    AnyRef → Any
  16. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  17. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  18. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
