Interface MicroBatchStream
All Superinterfaces:
SparkDataStream
A SparkDataStream for streaming queries with micro-batch mode.

Since:
3.0.0
Method Summary

Modifier and Type       Method                                          Description
PartitionReaderFactory  createReaderFactory()                           Returns a factory to create a PartitionReader for each InputPartition.
Offset                  latestOffset()                                  Returns the most recent offset available.
InputPartition[]        planInputPartitions(Offset start, Offset end)   Returns a list of input partitions given the start and end offsets.

Methods inherited from interface org.apache.spark.sql.connector.read.streaming.SparkDataStream:
commit, deserializeOffset, initialOffset, stop
Method Details

latestOffset

Offset latestOffset()

Returns the most recent offset available.
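To make the offset contract concrete, here is a minimal sketch of an offset for a hypothetical source whose position is a single counter of records produced so far. Only the Offset base class and its json() contract come from the Spark API; CounterOffset and its field are invented for this illustration.

```java
import org.apache.spark.sql.connector.read.streaming.Offset;

// Hypothetical offset for a source whose position is a count of records
// produced so far. Spark persists offsets to the checkpoint log via json(),
// and Offset defines equality on that JSON form.
public class CounterOffset extends Offset {
    private final long value;

    public CounterOffset(long value) {
        this.value = value;
    }

    public long value() {
        return value;
    }

    @Override
    public String json() {
        return Long.toString(value);
    }
}
```

A latestOffset() implementation for such a source would simply return new CounterOffset(head), where head is the source's current high-water mark.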
planInputPartitions

InputPartition[] planInputPartitions(Offset start, Offset end)

Returns a list of input partitions given the start and end offsets. Each InputPartition represents a data split that can be processed by one Spark task. The number of input partitions returned here is the same as the number of RDD partitions this scan outputs.

If the Scan supports filter pushdown, this stream is likely configured with a filter and is responsible for creating splits for that filter, which is not a full scan.

This method will be called multiple times, to launch one Spark job for each micro-batch in this data stream.
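As a sketch of the planning step, the snippet below splits one micro-batch's record range into contiguous splits, reusing the hypothetical CounterOffset above. CounterInputPartition, CounterPlanner, and the caller-chosen split count are likewise invented for illustration; a real source would put whatever an executor needs to locate its slice of the data into the partition.

```java
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.streaming.Offset;

// Hypothetical split covering record indices [begin, end). InputPartition
// extends Serializable because each split is shipped to the executor reading it.
class CounterInputPartition implements InputPartition {
    final long begin;
    final long end;

    CounterInputPartition(long begin, long end) {
        this.begin = begin;
        this.end = end;
    }
}

class CounterPlanner {
    // Divides the batch's record range [start, end) into up to numSplits
    // contiguous pieces. Each piece becomes one InputPartition, and therefore
    // one RDD partition (one task) of the scan.
    static InputPartition[] plan(Offset start, Offset end, int numSplits) {
        long lo = ((CounterOffset) start).value();
        long hi = ((CounterOffset) end).value();
        int n = (int) Math.max(1, Math.min(numSplits, hi - lo));
        InputPartition[] splits = new InputPartition[n];
        for (int i = 0; i < n; i++) {
            long from = lo + (hi - lo) * i / n;
            long to = lo + (hi - lo) * (i + 1) / n;
            splits[i] = new CounterInputPartition(from, to);
        }
        return splits;
    }
}
```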
createReaderFactory

PartitionReaderFactory createReaderFactory()

Returns a factory to create a PartitionReader for each InputPartition.
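Continuing the same hypothetical counter source, a factory typically casts each InputPartition back to its concrete split type and builds a reader over it. The factory is created on the driver and serialized to executors, so it should hold only lightweight, serializable state; everything except the Spark interfaces below is illustrative.

```java
import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.PartitionReader;
import org.apache.spark.sql.connector.read.PartitionReaderFactory;

// Hypothetical factory for the counter source sketched above.
class CounterReaderFactory implements PartitionReaderFactory {
    @Override
    public PartitionReader<InternalRow> createReader(InputPartition partition) {
        CounterInputPartition p = (CounterInputPartition) partition;
        return new PartitionReader<InternalRow>() {
            private long current = p.begin - 1;

            // Advance to the next record; false once the split is exhausted.
            @Override
            public boolean next() {
                current++;
                return current < p.end;
            }

            // Emit the current position as a single LongType column.
            @Override
            public InternalRow get() {
                return new GenericInternalRow(new Object[] {current});
            }

            @Override
            public void close() {
                // Nothing to release in this in-memory sketch.
            }
        };
    }
}
```

Together, the sketches above cover the three methods MicroBatchStream adds on top of SparkDataStream: latestOffset() reports how far the source has advanced, planInputPartitions(start, end) turns one micro-batch's offset range into splits, and createReaderFactory() supplies the reader each task runs over its split.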