Interface ContinuousStream
- All Superinterfaces:
- SparkDataStream
A SparkDataStream for streaming queries with continuous mode.

- Since:
- 3.0.0
Method Summary

- ContinuousPartitionReaderFactory createContinuousReaderFactory()
  Returns a factory to create a ContinuousPartitionReader for each InputPartition.
- Offset mergeOffsets(PartitionOffset[] offsets)
  Merge partitioned offsets coming from ContinuousPartitionReader instances for each partition to a single global offset.
- default boolean needsReconfiguration()
  The execution engine will call this method in every epoch to determine if new input partitions need to be generated, which may be required if, for example, the underlying source system has had partitions added or removed.
- InputPartition[] planInputPartitions(Offset start)
  Returns a list of input partitions given the start offset.

Methods inherited from interface org.apache.spark.sql.connector.read.streaming.SparkDataStream:
commit, deserializeOffset, initialOffset, stop
Method Details
planInputPartitions

InputPartition[] planInputPartitions(Offset start)

Returns a list of input partitions given the start offset. Each InputPartition represents a data split that can be processed by one Spark task. The number of input partitions returned here is the same as the number of RDD partitions this scan outputs.

If the Scan supports filter pushdown, this stream is likely configured with a filter and is responsible for creating splits for that filter, which is not a full scan.

This method will be called to launch one Spark job for reading the data stream. It will be called more than once if needsReconfiguration() returns true and Spark needs to launch a new job.
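For a concrete feel, the following is a minimal sketch assuming a hypothetical source with a fixed set of numbered partitions: the global offset records one read position per partition, and planInputPartitions would expand it into one serializable split per partition, i.e. one per Spark task. SimpleOffset, SimpleInputPartition, and the fromOffset helper are illustrative names, not part of the Spark API; the later examples on this page reuse them.

```java
import java.util.Arrays;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.streaming.Offset;

// Hypothetical global offset: one read position per source partition.
class SimpleOffset extends Offset {
  final long[] positions;

  SimpleOffset(long[] positions) {
    this.positions = positions;
  }

  // Offsets must round-trip through JSON; a long[] happens to print as a valid JSON array.
  @Override
  public String json() {
    return Arrays.toString(positions);
  }
}

// Hypothetical split: a small serializable carrier saying which partition to read and
// where to start reading. Spark ships one of these to each task.
class SimpleInputPartition implements InputPartition {
  final int partitionId;
  final long startPosition;

  SimpleInputPartition(int partitionId, long startPosition) {
    this.partitionId = partitionId;
    this.startPosition = startPosition;
  }

  // The body of planInputPartitions(start) would typically be this expansion:
  static InputPartition[] fromOffset(SimpleOffset start) {
    InputPartition[] splits = new InputPartition[start.positions.length];
    for (int i = 0; i < splits.length; i++) {
      splits[i] = new SimpleInputPartition(i, start.positions[i]);
    }
    return splits;
  }
}
```

Keeping the split a plain data holder matters because Spark serializes it and sends it to executors; the actual reading happens in the ContinuousPartitionReader created from it (see the next method).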
createContinuousReaderFactory

ContinuousPartitionReaderFactory createContinuousReaderFactory()

Returns a factory to create a ContinuousPartitionReader for each InputPartition.
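Continuing the hypothetical sketch above, the factory is created on the driver, serialized to executors, and asked to build one ContinuousPartitionReader per split. The toy reader below just advances a counter forever and reports its current position as a per-partition offset. SimpleReaderFactory, SimpleContinuousReader, and SimplePartitionOffset are illustrative names, and SimpleInputPartition comes from the planInputPartitions example.

```java
import java.io.IOException;
import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReader;
import org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReaderFactory;
import org.apache.spark.sql.connector.read.streaming.PartitionOffset;

// Hypothetical per-partition offset that each reader reports via getOffset().
class SimplePartitionOffset implements PartitionOffset {
  final int partitionId;
  final long position;

  SimplePartitionOffset(int partitionId, long position) {
    this.partitionId = partitionId;
    this.position = position;
  }
}

// Created on the driver, serialized to executors, and used once per InputPartition.
class SimpleReaderFactory implements ContinuousPartitionReaderFactory {
  @Override
  public ContinuousPartitionReader<InternalRow> createReader(InputPartition partition) {
    SimpleInputPartition p = (SimpleInputPartition) partition;
    return new SimpleContinuousReader(p.partitionId, p.startPosition);
  }
}

// Toy reader: emits (partitionId, position) rows forever and tracks its own offset.
class SimpleContinuousReader implements ContinuousPartitionReader<InternalRow> {
  private final int partitionId;
  private long position;   // last position already handed to Spark

  SimpleContinuousReader(int partitionId, long startPosition) {
    this.partitionId = partitionId;
    this.position = startPosition;
  }

  @Override
  public boolean next() throws IOException {
    position++;            // a real source would block here until new data arrives
    return true;           // continuous readers normally never signal end-of-data
  }

  @Override
  public InternalRow get() {
    return new GenericInternalRow(new Object[] {partitionId, position});
  }

  @Override
  public PartitionOffset getOffset() {
    return new SimplePartitionOffset(partitionId, position);
  }

  @Override
  public void close() throws IOException {
    // release any connections or buffers held by this reader
  }
}
```

In this sketch the reader never returns false from next(): in continuous mode a task keeps polling its partition until Spark stops the query or reconfigures the job.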
mergeOffsets

Offset mergeOffsets(PartitionOffset[] offsets)

Merge partitioned offsets coming from ContinuousPartitionReader instances for each partition to a single global offset.
needsReconfiguration

default boolean needsReconfiguration()

The execution engine will call this method in every epoch to determine if new input partitions need to be generated, which may be required if, for example, the underlying source system has had partitions added or removed.

If true, the Spark job to scan this continuous data stream will be interrupted and Spark will launch it again with a new list of input partitions.
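Tying the sketch together, the hypothetical SimpleContinuousStream below reuses the SimpleOffset, SimpleInputPartition, SimplePartitionOffset, and SimpleReaderFactory types from the examples above. It shows mergeOffsets scattering each reader's reported position back into one global offset, and it overrides needsReconfiguration to request re-planning when the source's partition count differs from what was last planned. All class names and the currentPartitionCount helper are assumptions for illustration, not Spark API.

```java
import java.util.Arrays;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.streaming.ContinuousPartitionReaderFactory;
import org.apache.spark.sql.connector.read.streaming.ContinuousStream;
import org.apache.spark.sql.connector.read.streaming.Offset;
import org.apache.spark.sql.connector.read.streaming.PartitionOffset;

// Hypothetical continuous stream whose partition count may change between epochs.
class SimpleContinuousStream implements ContinuousStream {
  private int plannedPartitions;

  SimpleContinuousStream(int initialPartitions) {
    this.plannedPartitions = initialPartitions;
  }

  // Stand-in for a metadata call to the underlying source system.
  private int currentPartitionCount() {
    return plannedPartitions;   // a real source would query its catalog or brokers here
  }

  @Override
  public Offset initialOffset() {
    return new SimpleOffset(new long[plannedPartitions]);   // start every partition at 0
  }

  @Override
  public InputPartition[] planInputPartitions(Offset start) {
    SimpleOffset offset = (SimpleOffset) start;
    plannedPartitions = offset.positions.length;
    return SimpleInputPartition.fromOffset(offset);          // one split per source partition
  }

  @Override
  public ContinuousPartitionReaderFactory createContinuousReaderFactory() {
    return new SimpleReaderFactory();
  }

  // Scatter each reader's last reported position back into one global offset.
  @Override
  public Offset mergeOffsets(PartitionOffset[] offsets) {
    long[] positions = new long[plannedPartitions];
    for (PartitionOffset po : offsets) {
      SimplePartitionOffset spo = (SimplePartitionOffset) po;
      positions[spo.partitionId] = spo.position;
    }
    return new SimpleOffset(positions);
  }

  // Ask Spark to tear down the running job and re-plan when partitions change.
  @Override
  public boolean needsReconfiguration() {
    return currentPartitionCount() != plannedPartitions;
  }

  @Override
  public Offset deserializeOffset(String json) {
    String body = json.substring(1, json.length() - 1);      // json() produced "[p0, p1, ...]"
    long[] positions = body.isEmpty()
        ? new long[0]
        : Arrays.stream(body.split(",")).mapToLong(s -> Long.parseLong(s.trim())).toArray();
    return new SimpleOffset(positions);
  }

  @Override
  public void commit(Offset end) {
    // optionally let the source discard data at or before `end`
  }

  @Override
  public void stop() {
    // close connections to the source system
  }
}
```

With this in place, an epoch proceeds as documented above: Spark collects a SimplePartitionOffset from each reader, calls mergeOffsets to form the global offset it commits, and, whenever needsReconfiguration returns true, interrupts the job and calls planInputPartitions again from that offset.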
 