Package org.apache.spark.sql.connector.read
Class / Description

Batch - A physical representation of a data source scan for batch queries.
HasPartitionKey - A mix-in for input partitions whose records are clustered on the same set of partition keys (provided via SupportsReportPartitioning, see below).
InputPartition - A serializable representation of an input partition returned by Batch.planInputPartitions() and the corresponding ones in streaming.
LocalScan - A special Scan that is executed locally on the driver instead of on executors.
PartitionReader - A partition reader returned by PartitionReaderFactory.createReader(InputPartition) or PartitionReaderFactory.createColumnarReader(InputPartition).
PartitionReaderFactory - A factory used to create PartitionReader instances.
Scan - A logical representation of a data source scan.
Scan.ColumnarSupportMode - An enum that defines how columnar support for the partitions of the data source should be determined.
ScanBuilder - An interface for building the Scan (a minimal end-to-end sketch follows this list).
Statistics - An interface to represent statistics for a data source, returned by SupportsReportStatistics.estimateStatistics().
SupportsPushDownAggregates - A mix-in interface for ScanBuilder; data sources can implement it to push down aggregates.
SupportsPushDownFilters - A mix-in interface for ScanBuilder; data sources can implement it to push down filters and reduce the amount of data read.
SupportsPushDownLimit - A mix-in interface for ScanBuilder; data sources can implement it to push down LIMIT (see the second sketch below).
SupportsPushDownOffset - A mix-in interface for ScanBuilder; data sources can implement it to push down OFFSET.
SupportsPushDownRequiredColumns - A mix-in interface for ScanBuilder; data sources can implement it to prune columns and read only those the query requires.
SupportsPushDownTableSample - A mix-in interface for ScanBuilder; data sources can implement it to push down table sampling (TABLESAMPLE).
SupportsPushDownTopN - A mix-in interface for ScanBuilder; data sources can implement it to push down top-N (ORDER BY ... LIMIT n).
SupportsPushDownV2Filters - A mix-in interface for ScanBuilder; data sources can implement it to push down V2 Predicate filters.
SupportsReportOrdering - A mix-in interface for Scan; data sources can implement it to report the ordering of data in each partition to Spark.
SupportsReportPartitioning - A mix-in interface for Scan; data sources can implement it to report data partitioning and avoid unnecessary shuffles on the Spark side.
SupportsReportStatistics - A mix-in interface for Scan; data sources can implement it to report statistics to Spark.
SupportsRuntimeFiltering - A mix-in interface for Scan; data sources can implement it to filter the initially planned InputPartitions using predicates Spark infers at runtime.
SupportsRuntimeV2Filtering - A mix-in interface for Scan; the V2 Predicate counterpart of SupportsRuntimeFiltering.
V1Scan - A trait that should be implemented by V1 data sources that would like to leverage the DataSource V2 read code paths.
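To make the listing above concrete, the following sketch shows one minimal way the core interfaces compose for a batch read: a ScanBuilder builds a Scan, the Scan converts to a Batch, the Batch plans its InputPartitions and supplies a PartitionReaderFactory, and a PartitionReader produces the rows of one partition on an executor. This is an illustrative sketch, not code from Spark; the names SimpleScanBuilder, SimplePartition and SimpleReaderFactory, the single-column schema and the hard-coded rows are all invented for the example. A real source would hand such a builder to Spark from SupportsRead.newScanBuilder(...) in org.apache.spark.sql.connector.catalog.

```java
import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
import org.apache.spark.sql.connector.read.Batch;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.PartitionReader;
import org.apache.spark.sql.connector.read.PartitionReaderFactory;
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.ScanBuilder;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.unsafe.types.UTF8String;

// Hypothetical source with one partition and three hard-coded rows.
// One class plays all three planning roles here, which works because
// ScanBuilder, Scan and Batch are separate single-purpose interfaces.
class SimpleScanBuilder implements ScanBuilder, Scan, Batch {
  private final StructType schema = new StructType().add("value", "string");

  @Override public Scan build() { return this; }               // ScanBuilder
  @Override public StructType readSchema() { return schema; }  // Scan
  @Override public Batch toBatch() { return this; }            // Scan -> Batch

  @Override public InputPartition[] planInputPartitions() {    // Batch
    return new InputPartition[] { new SimplePartition() };
  }

  @Override public PartitionReaderFactory createReaderFactory() {  // Batch
    return new SimpleReaderFactory();
  }
}

// InputPartition is Serializable; instances are shipped to executors as-is.
class SimplePartition implements InputPartition {}

// The factory is also Serializable and creates one reader per partition.
class SimpleReaderFactory implements PartitionReaderFactory {
  @Override public PartitionReader<InternalRow> createReader(InputPartition partition) {
    return new PartitionReader<InternalRow>() {
      private final String[] values = {"a", "b", "c"};
      private int index = -1;

      @Override public boolean next() { return ++index < values.length; }

      @Override public InternalRow get() {
        return new GenericInternalRow(new Object[] { UTF8String.fromString(values[index]) });
      }

      @Override public void close() { /* nothing to release in this sketch */ }
    };
  }
}
```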
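The SupportsPushDown* mix-ins operate on the ScanBuilder before build() is called. Below is a second hypothetical sketch of a builder that accepts column pruning and a pushed LIMIT; PruningScanBuilder and its default two-column schema are invented for illustration.

```java
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.SupportsPushDownLimit;
import org.apache.spark.sql.connector.read.SupportsPushDownRequiredColumns;
import org.apache.spark.sql.types.StructType;

// Hypothetical builder: Spark calls the push-down methods first, then build().
class PruningScanBuilder implements SupportsPushDownRequiredColumns, SupportsPushDownLimit {
  // Full schema of the imaginary underlying table; pruneColumns narrows it.
  private StructType requiredSchema = new StructType().add("id", "long").add("value", "string");
  private int pushedLimit = -1;  // -1 means no LIMIT was pushed

  @Override public void pruneColumns(StructType requiredSchema) {
    // Spark passes only the columns the query actually references.
    this.requiredSchema = requiredSchema;
  }

  @Override public boolean pushLimit(int limit) {
    this.pushedLimit = limit;
    // Returning true reports that the limit was pushed; Spark still applies
    // LIMIT on its side unless isPartiallyPushed() is overridden to return false.
    return true;
  }

  @Override public Scan build() {
    StructType prunedSchema = requiredSchema;
    // A real source would also thread pushedLimit into its Batch planning;
    // this minimal Scan only reports the pruned schema.
    return new Scan() {
      @Override public StructType readSchema() { return prunedSchema; }
    };
  }
}
```

Spark's optimizer drives these calls: when a query selects a subset of columns or contains a LIMIT, it invokes the corresponding push-down methods on the builder before build(), so the resulting Scan can skip data that the query would discard anyway.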