@Evolving
public interface SupportsPushDownAggregates
extends ScanBuilder

A mix-in interface for ScanBuilder. Data sources can implement this interface to
push down aggregates. Spark assumes that the data source can't fully complete the
grouping work, and will group the data source output again. For queries like
"SELECT min(value) AS m FROM t GROUP BY key", after pushing down the aggregate
to the data source, the data source can still output data with duplicated keys, which is OK
as Spark will do GROUP BY key again. The final query plan can be something like this:
Aggregate [key#1], [min(min(value)#2) AS m#3]
  +- RelationV2[key#1, min(value)#2]

Similarly, if there is no grouping expression, the data source can still output more than one row.
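The plan above can be illustrated with a small, Spark-free sketch: the data source emits partial `min(value)` rows, possibly with duplicated keys, and the final `Aggregate` node takes the min of those partial minimums per key. The class and method names here are hypothetical, for illustration only.

```java
import java.util.*;

public class ReAggregateDemo {
    // Models Spark's final Aggregate step: GROUP BY key, taking the min
    // of the partial min(value) results emitted by the data source.
    static Map<String, Integer> finalAggregate(List<Map.Entry<String, Integer>> partial) {
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, Integer> e : partial) {
            result.merge(e.getKey(), e.getValue(), Math::min);
        }
        return result;
    }

    public static void main(String[] args) {
        // Partial results from the data source: key "a" appears twice
        // because the source could not fully complete the grouping.
        List<Map.Entry<String, Integer>> partial = List.of(
            Map.entry("a", 3), Map.entry("a", 1), Map.entry("b", 5));
        System.out.println(finalAggregate(partial)); // {a=1, b=5}
    }
}
```

Even though the source did only partial grouping, the final result is still correct, which is why Spark can safely accept incomplete grouping from the data source.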
When pushing down operators, Spark pushes down filters to the data source first, then pushes down aggregates or applies column pruning. Depending on the data source implementation, aggregates may or may not be pushed down together with filters. If pushed filters still need to be evaluated after scanning, aggregates can't be pushed down.
Modifier and Type | Method and Description
---|---
`boolean` | `pushAggregation(Aggregation aggregation)` Pushes down Aggregation to the data source.
Methods inherited from interface ScanBuilder: build
boolean pushAggregation(Aggregation aggregation)

Pushes down Aggregation to the data source. Returns true if the aggregation can be pushed down to the data source, false otherwise.
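A minimal sketch of how an implementation might decide whether to accept the pushed aggregation, following the rule above that aggregates can't be pushed down when pushed filters still need post-scan evaluation. The `Aggregation`, `ScanBuilder`, and `ExampleScanBuilder` types below are simplified stand-ins, not the real Spark connector API (which lives in `org.apache.spark.sql.connector.read`).

```java
// Simplified stand-ins for the Spark connector interfaces (assumption:
// the real interfaces carry more methods than shown here).
interface Aggregation {}

interface ScanBuilder {}

interface SupportsPushDownAggregates extends ScanBuilder {
    boolean pushAggregation(Aggregation aggregation);
}

// Hypothetical scan builder that accepts the aggregation only when no
// pushed filter must be re-evaluated after scanning.
class ExampleScanBuilder implements SupportsPushDownAggregates {
    private final boolean hasPostScanFilters;
    private Aggregation pushedAggregation;

    ExampleScanBuilder(boolean hasPostScanFilters) {
        this.hasPostScanFilters = hasPostScanFilters;
    }

    @Override
    public boolean pushAggregation(Aggregation aggregation) {
        // If pushed filters still need to be evaluated after scanning,
        // refuse the pushdown; Spark will then aggregate by itself.
        if (hasPostScanFilters) {
            return false;
        }
        this.pushedAggregation = aggregation;
        return true;
    }
}

public class PushDownDemo {
    public static void main(String[] args) {
        Aggregation agg = new Aggregation() {};
        System.out.println(new ExampleScanBuilder(false).pushAggregation(agg)); // true
        System.out.println(new ExampleScanBuilder(true).pushAggregation(agg));  // false
    }
}
```

Returning `false` is always safe: Spark simply falls back to evaluating the full aggregation itself over the unaggregated scan output.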