@Evolving public interface SupportsPushDownAggregates extends ScanBuilder
A mix-in interface for ScanBuilder. Data sources can implement this interface to push down aggregates.
 
 If the data source can't fully complete the grouping work, then
 supportCompletePushDown(Aggregation) should return false, and Spark will group the data
 source output again. For queries like "SELECT min(value) AS m FROM t GROUP BY key", after
 pushing down the aggregate to the data source, the data source can still output data with
 duplicated keys, which is OK as Spark will do GROUP BY key again. The final query plan can be
 something like this:
 
   Aggregate [key#1], [min(min_value#2) AS m#3]
     +- RelationV2[key#1, min_value#2]
 
 Similarly, if there is no grouping expression, the data source can still output more than one
 row.
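
 As an illustration, here is a minimal sketch of such a partial push-down: a source that can pre-aggregate MIN/MAX within each partition but cannot merge partitions, so Spark does the final GROUP BY. The MyScan class is a hypothetical Scan implementation, not part of this API.

```java
import org.apache.spark.sql.connector.expressions.aggregate.AggregateFunc;
import org.apache.spark.sql.connector.expressions.aggregate.Aggregation;
import org.apache.spark.sql.connector.expressions.aggregate.Max;
import org.apache.spark.sql.connector.expressions.aggregate.Min;
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.SupportsPushDownAggregates;

// Sketch of a source that can pre-aggregate within each partition but
// cannot merge partitions, so Spark must run the final GROUP BY itself.
class PartialAggScanBuilder implements SupportsPushDownAggregates {
  private Aggregation pushedAggregation;  // null if nothing was pushed

  @Override
  public boolean pushAggregation(Aggregation aggregation) {
    // Accept only MIN/MAX; anything else stays in Spark entirely.
    for (AggregateFunc func : aggregation.aggregateExpressions()) {
      if (!(func instanceof Min) && !(func instanceof Max)) {
        return false;
      }
    }
    this.pushedAggregation = aggregation;
    return true;
  }

  @Override
  public boolean supportCompletePushDown(Aggregation aggregation) {
    // Partial results may contain duplicated group keys, so Spark
    // groups the scan output again, as in the plan shown above.
    return false;
  }

  @Override
  public Scan build() {
    // MyScan is hypothetical; its output columns would be the grouping
    // columns followed by one column per pushed aggregate function.
    return new MyScan(pushedAggregation);
  }
}
```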
 When pushing down operators, Spark pushes down filters to the data source first, then pushes down aggregates or applies column pruning. Depending on the data source implementation, aggregates may or may not be pushed down together with filters. If pushed filters still need to be evaluated after scanning, aggregates can't be pushed down.
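
 The sketch below shows how this ordering can play out in a builder that implements both SupportsPushDownFilters and this interface: because filters are pushed first, the builder already knows at pushAggregation time whether any filter survives to post-scan evaluation. The MyScan class and the rule for which filters are evaluable are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.connector.expressions.aggregate.Aggregation;
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.SupportsPushDownAggregates;
import org.apache.spark.sql.connector.read.SupportsPushDownFilters;
import org.apache.spark.sql.sources.Filter;
import org.apache.spark.sql.sources.IsNotNull;

// Sketch: Spark calls pushFilters(...) before pushAggregation(...), so
// the builder can refuse aggregates when filters remain post-scan.
class FilterThenAggScanBuilder
    implements SupportsPushDownFilters, SupportsPushDownAggregates {
  private Filter[] pushed = new Filter[0];
  private Filter[] postScan = new Filter[0];
  private Aggregation pushedAggregation;

  @Override
  public Filter[] pushFilters(Filter[] filters) {
    List<Filter> accepted = new ArrayList<>();
    List<Filter> remaining = new ArrayList<>();
    for (Filter f : filters) {
      // Assumption: this toy source can only evaluate IS NOT NULL checks.
      if (f instanceof IsNotNull) accepted.add(f); else remaining.add(f);
    }
    pushed = accepted.toArray(new Filter[0]);
    postScan = remaining.toArray(new Filter[0]);
    return postScan;  // filters Spark must still evaluate after scanning
  }

  @Override
  public Filter[] pushedFilters() {
    return pushed;
  }

  @Override
  public boolean pushAggregation(Aggregation aggregation) {
    // Per the contract above: if filters still need to be evaluated
    // after scanning, aggregates can't be pushed down.
    if (postScan.length > 0) return false;
    this.pushedAggregation = aggregation;
    return true;
  }

  @Override
  public Scan build() {
    return new MyScan(pushed, pushedAggregation);  // MyScan is hypothetical
  }
}
```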
| Modifier and Type | Method and Description |
|---|---|
| boolean | pushAggregation(Aggregation aggregation): Pushes down Aggregation to the data source. |
| default boolean | supportCompletePushDown(Aggregation aggregation): Whether the data source supports complete aggregation push-down. |
Methods inherited from interface ScanBuilder: build

default boolean supportCompletePushDown(Aggregation aggregation)
Whether the data source supports complete aggregation push-down. If this returns false, Spark will group the data source output again.
aggregation - Aggregation in SQL statement.

boolean pushAggregation(Aggregation aggregation)
Pushes down Aggregation to the data source.
aggregation - Aggregation in SQL statement.
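
For contrast with the partial case above, a source that can execute the entire GROUP BY itself, e.g. one backed by a remote SQL engine, can override the default supportCompletePushDown to return true, in which case Spark skips its own final grouping. A minimal sketch, assuming a hypothetical RemoteSqlScan and that only MIN and COUNT translate to the remote dialect:

```java
import org.apache.spark.sql.connector.expressions.aggregate.AggregateFunc;
import org.apache.spark.sql.connector.expressions.aggregate.Aggregation;
import org.apache.spark.sql.connector.expressions.aggregate.Count;
import org.apache.spark.sql.connector.expressions.aggregate.Min;
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.SupportsPushDownAggregates;

// Sketch of a source that runs the whole GROUP BY remotely, so Spark
// does not need to group the scan output again.
class CompleteAggScanBuilder implements SupportsPushDownAggregates {
  private Aggregation pushedAggregation;

  // Assumption: only MIN and COUNT translate to the remote dialect.
  private boolean translatable(Aggregation aggregation) {
    for (AggregateFunc func : aggregation.aggregateExpressions()) {
      if (!(func instanceof Min) && !(func instanceof Count)) {
        return false;
      }
    }
    return true;
  }

  @Override
  public boolean supportCompletePushDown(Aggregation aggregation) {
    return translatable(aggregation);  // remote engine does the full GROUP BY
  }

  @Override
  public boolean pushAggregation(Aggregation aggregation) {
    if (!translatable(aggregation)) return false;
    this.pushedAggregation = aggregation;
    return true;
  }

  @Override
  public Scan build() {
    return new RemoteSqlScan(pushedAggregation);  // hypothetical Scan
  }
}
```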