Class RelationalGroupedDataset

Object
org.apache.spark.sql.api.RelationalGroupedDataset<Dataset>
org.apache.spark.sql.RelationalGroupedDataset

public class RelationalGroupedDataset extends RelationalGroupedDataset<Dataset>
A set of methods for aggregations on a DataFrame, created by groupBy, cube or rollup (and also pivot).

The main method is the agg function, which has multiple variants. This class also contains some first-order statistics such as mean and sum for convenience.

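A minimal usage sketch in Scala, assuming a hypothetical DataFrame df with "department", "age" and "expense" columns:

   // groupBy returns a RelationalGroupedDataset; an aggregation such as
   // agg or count turns it back into a DataFrame.
   import org.apache.spark.sql.functions._
   val grouped = df.groupBy("department")
   grouped.agg(max("age"), sum("expense"))
   grouped.count()
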
Since:
2.0.0
Note:
This class was named GroupedData in Spark 1.x.

  • Method Details

    • apply

      public static RelationalGroupedDataset apply(Dataset<Row> df, scala.collection.immutable.Seq<org.apache.spark.sql.catalyst.expressions.Expression> groupingExprs, RelationalGroupedDataset.GroupType groupType)
    • agg

      public Dataset<Row> agg(Column expr, Column... exprs)
      Description copied from class: RelationalGroupedDataset
      Compute aggregates by specifying a series of aggregate columns. Note that this function by default retains the grouping columns in its output. To not retain grouping columns, set spark.sql.retainGroupColumns to false.

      The available aggregate methods are defined in functions.

      
         // Selects the age of the oldest employee and the aggregate expense for each department
      
         // Scala:
         import org.apache.spark.sql.functions._
         df.groupBy("department").agg(max("age"), sum("expense"))
      
         // Java:
         import static org.apache.spark.sql.functions.*;
         df.groupBy("department").agg(max("age"), sum("expense"));
       

      Note that before Spark 1.4, the default behavior was to NOT retain grouping columns. To revert to that behavior, set the config variable spark.sql.retainGroupColumns to false.

      
         // Scala, 1.3.x:
         df.groupBy("department").agg($"department", max("age"), sum("expense"))
      
         // Java, 1.3.x:
         df.groupBy("department").agg(col("department"), max("age"), sum("expense"));
       

      Overrides:
      agg in class RelationalGroupedDataset<Dataset>
      Parameters:
      expr - (undocumented)
      exprs - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • mean

      public Dataset<Row> mean(String... colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the average value for each numeric column for each group. This is an alias for avg. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the averages for those columns are computed.

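      For example, a sketch assuming a hypothetical DataFrame df with a "department" column and numeric "age" and "expense" columns:

         // Average age and expense per department; the grouping column is kept
         df.groupBy("department").mean("age", "expense")

         // With no column names, averages are computed for every numeric column
         df.groupBy("department").mean()
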
      Overrides:
      mean in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • max

      public Dataset<Row> max(String... colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the max value for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the max values for those columns are computed.

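      For example, a sketch assuming a hypothetical DataFrame df with a "department" column and a numeric "age" column:

         // Maximum age per department; the grouping column is kept in the result
         df.groupBy("department").max("age")
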
      Overrides:
      max in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • avg

      public Dataset<Row> avg(String... colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the mean value for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the mean values for those columns are computed.

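      For example, a sketch assuming a hypothetical DataFrame df with a "department" column and a numeric "expense" column:

         // Mean expense per department; equivalent to mean("expense")
         df.groupBy("department").avg("expense")
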
      Overrides:
      avg in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • min

      public Dataset<Row> min(String... colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the min value for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the min values for those columns are computed.

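      For example, a sketch assuming a hypothetical DataFrame df with a "department" column and a numeric "age" column:

         // Minimum age per department
         df.groupBy("department").min("age")
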
      Overrides:
      min in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • sum

      public Dataset<Row> sum(String... colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the sum for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the sums for those columns are computed.

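      For example, a sketch assuming a hypothetical DataFrame df with a "department" column and a numeric "expense" column:

         // Total expense per department; omit column names to sum every numeric column
         df.groupBy("department").sum("expense")
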
      Overrides:
      sum in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • as

      public <K, T> KeyValueGroupedDataset<K,T> as(Encoder<K> evidence$1, Encoder<T> evidence$2)
      Description copied from class: RelationalGroupedDataset
      Returns a KeyValueGroupedDataset where the data is grouped by the grouping expressions of the current RelationalGroupedDataset.

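      A minimal sketch in Scala, assuming a SparkSession named spark and a hypothetical case class Employee(department: String, age: Long) whose schema matches df (the key and value encoders are resolved implicitly):

         import spark.implicits._

         // Group by department, then switch to the typed KeyValueGroupedDataset API
         val byDept = df.groupBy($"department").as[String, Employee]

         // For example, compute the oldest age per department with mapGroups
         byDept.mapGroups((dept, employees) => (dept, employees.map(_.age).max))
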
      Specified by:
      as in class RelationalGroupedDataset<Dataset>
      Parameters:
      evidence$1 - (undocumented)
      evidence$2 - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • agg

      public Dataset<Row> agg(scala.Tuple2<String,String> aggExpr, scala.collection.immutable.Seq<scala.Tuple2<String,String>> aggExprs)
      Description copied from class: RelationalGroupedDataset
      (Scala-specific) Compute aggregates by specifying the column names and aggregate methods. The resulting DataFrame will also contain the grouping columns.

      The available aggregate methods are avg, max, min, sum, count.

      
         // Selects the age of the oldest employee and the aggregate expense for each department
         df.groupBy("department").agg(
           "age" -> "max",
           "expense" -> "sum"
         )
       

      Overrides:
      agg in class RelationalGroupedDataset<Dataset>
      Parameters:
      aggExpr - (undocumented)
      aggExprs - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • agg

      public Dataset<Row> agg(scala.collection.immutable.Map<String,String> exprs)
      Description copied from class: RelationalGroupedDataset
      (Scala-specific) Compute aggregates by specifying a map from column name to aggregate methods. The resulting DataFrame will also contain the grouping columns.

      The available aggregate methods are avg, max, min, sum, count.

      
         // Selects the age of the oldest employee and the aggregate expense for each department
         df.groupBy("department").agg(Map(
           "age" -> "max",
           "expense" -> "sum"
         ))
       

      Overrides:
      agg in class RelationalGroupedDataset<Dataset>
      Parameters:
      exprs - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • agg

      public Dataset<Row> agg(Map<String,String> exprs)
      Description copied from class: RelationalGroupedDataset
      (Java-specific) Compute aggregates by specifying a map from column name to aggregate methods. The resulting DataFrame will also contain the grouping columns.

      The available aggregate methods are avg, max, min, sum, count.

      
         // Selects the age of the oldest employee and the aggregate expense for each department
         import com.google.common.collect.ImmutableMap;
         df.groupBy("department").agg(ImmutableMap.of("age", "max", "expense", "sum"));
       

      Overrides:
      agg in class RelationalGroupedDataset<Dataset>
      Parameters:
      exprs - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • agg

      public Dataset<Row> agg(Column expr, scala.collection.immutable.Seq<Column> exprs)
      Description copied from class: RelationalGroupedDataset
      Compute aggregates by specifying a series of aggregate columns. Note that this function by default retains the grouping columns in its output. To not retain grouping columns, set spark.sql.retainGroupColumns to false.

      The available aggregate methods are defined in functions.

      
         // Selects the age of the oldest employee and the aggregate expense for each department
      
         // Scala:
         import org.apache.spark.sql.functions._
         df.groupBy("department").agg(max("age"), sum("expense"))
      
         // Java:
         import static org.apache.spark.sql.functions.*;
         df.groupBy("department").agg(max("age"), sum("expense"));
       

      Note that before Spark 1.4, the default behavior was to NOT retain grouping columns. To revert to that behavior, set the config variable spark.sql.retainGroupColumns to false.

      
         // Scala, 1.3.x:
         df.groupBy("department").agg($"department", max("age"), sum("expense"))
      
         // Java, 1.3.x:
         df.groupBy("department").agg(col("department"), max("age"), sum("expense"));
       

      Overrides:
      agg in class RelationalGroupedDataset<Dataset>
      Parameters:
      expr - (undocumented)
      exprs - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • count

      public Dataset<Row> count()
      Description copied from class: RelationalGroupedDataset
      Count the number of rows for each group. The resulting DataFrame will also contain the grouping columns.

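      For example, a sketch assuming a hypothetical DataFrame df with a "department" column:

         // Number of rows per department, returned in a column named "count"
         df.groupBy("department").count()
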
      Overrides:
      count in class RelationalGroupedDataset<Dataset>
      Returns:
      (undocumented)
      Inheritdoc:
    • mean

      public Dataset<Row> mean(scala.collection.immutable.Seq<String> colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the average value for each numeric column for each group. This is an alias for avg. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the averages for those columns are computed.

      Overrides:
      mean in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • max

      public Dataset<Row> max(scala.collection.immutable.Seq<String> colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the max value for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the max values for those columns are computed.

      Overrides:
      max in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • avg

      public Dataset<Row> avg(scala.collection.immutable.Seq<String> colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the mean value for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the mean values for those columns are computed.

      Overrides:
      avg in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • min

      public Dataset<Row> min(scala.collection.immutable.Seq<String> colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the min value for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the min values for those columns are computed.

      Overrides:
      min in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • sum

      public Dataset<Row> sum(scala.collection.immutable.Seq<String> colNames)
      Description copied from class: RelationalGroupedDataset
      Compute the sum for each numeric column for each group. The resulting DataFrame will also contain the grouping columns. When specific columns are given, only the sums for those columns are computed.

      Overrides:
      sum in class RelationalGroupedDataset<Dataset>
      Parameters:
      colNames - (undocumented)
      Returns:
      (undocumented)
      Inheritdoc:
    • pivot

      public RelationalGroupedDataset pivot(String pivotColumn)
      Description copied from class: RelationalGroupedDataset
      Pivots a column of the current DataFrame and performs the specified aggregation.

      Spark will eagerly compute the distinct values in pivotColumn so it can determine the resulting schema of the transformation. To avoid any eager computations, provide an explicit list of values via pivot(pivotColumn: String, values: Seq[Any]).

      
         // Compute the sum of earnings for each year by course with each course as a separate column
         df.groupBy("year").pivot("course").sum("earnings")
       

      Overrides:
      pivot in class RelationalGroupedDataset<Dataset>
      Parameters:
      pivotColumn - Name of the column to pivot.
      Returns:
      (undocumented)
      See Also:
      • org.apache.spark.sql.Dataset.unpivot for the reverse operation, except for the aggregation.
      Inheritdoc:
    • pivot

      public RelationalGroupedDataset pivot(String pivotColumn, scala.collection.immutable.Seq<Object> values)
      Description copied from class: RelationalGroupedDataset
      Pivots a column of the current DataFrame and performs the specified aggregation. There are two versions of the pivot function: one that requires the caller to specify the list of distinct values to pivot on, and one that does not. The latter is more concise but less efficient, because Spark needs to first compute the list of distinct values internally.

      
         // Compute the sum of earnings for each year by course with each course as a separate column
         df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
      
         // Or without specifying column values (less efficient)
         df.groupBy("year").pivot("course").sum("earnings")
       

      Since Spark 3.0.0, values can be literal columns, for instance structs. To pivot on multiple columns, use the struct function to combine the columns and values:

      
         df.groupBy("year")
           .pivot("trainingCourse", Seq(struct(lit("java"), lit("Experts"))))
           .agg(sum($"earnings"))
       

      Overrides:
      pivot in class RelationalGroupedDataset<Dataset>
      Parameters:
      pivotColumn - Name of the column to pivot.
      values - List of values that will be translated to columns in the output DataFrame.
      Returns:
      (undocumented)
      See Also:
      • org.apache.spark.sql.Dataset.unpivot for the reverse operation, except for the aggregation.
      Inheritdoc:
    • pivot

      public RelationalGroupedDataset pivot(String pivotColumn, List<Object> values)
      Description copied from class: RelationalGroupedDataset
      (Java-specific) Pivots a column of the current DataFrame and performs the specified aggregation.

      There are two versions of the pivot function: one that requires the caller to specify the list of distinct values to pivot on, and one that does not. The latter is more concise but less efficient, because Spark needs to first compute the list of distinct values internally.

      
         // Compute the sum of earnings for each year by course with each course as a separate column
         df.groupBy("year").pivot("course", Arrays.<Object>asList("dotNET", "Java")).sum("earnings");
      
         // Or without specifying column values (less efficient)
         df.groupBy("year").pivot("course").sum("earnings");
       

      Overrides:
      pivot in class RelationalGroupedDataset<Dataset>
      Parameters:
      pivotColumn - Name of the column to pivot.
      values - List of values that will be translated to columns in the output DataFrame.
      Returns:
      (undocumented)
      See Also:
      • org.apache.spark.sql.Dataset.unpivot for the reverse operation, except for the aggregation.
      Inheritdoc:
    • pivot

      public RelationalGroupedDataset pivot(Column pivotColumn, List<Object> values)
      Description copied from class: RelationalGroupedDataset
      (Java-specific) Pivots a column of the current DataFrame and performs the specified aggregation. This is an overloaded version of the pivot method with pivotColumn of the String type.

      Overrides:
      pivot in class RelationalGroupedDataset<Dataset>
      Parameters:
      pivotColumn - the column to pivot.
      values - List of values that will be translated to columns in the output DataFrame.
      Returns:
      (undocumented)
      See Also:
      • org.apache.spark.sql.Dataset.unpivot for the reverse operation, except for the aggregation.
      Inheritdoc:
    • pivot

      public RelationalGroupedDataset pivot(Column pivotColumn)
      Description copied from class: RelationalGroupedDataset
      Pivots a column of the current DataFrame and performs the specified aggregation.

      Spark will eagerly compute the distinct values in pivotColumn so it can determine the resulting schema of the transformation. To avoid any eager computations, provide an explicit list of values via pivot(pivotColumn: Column, values: Seq[Any]).

      
         // Compute the sum of earnings for each year by course with each course as a separate column
         df.groupBy($"year").pivot($"course").sum($"earnings");
       

      Specified by:
      pivot in class RelationalGroupedDataset<Dataset>
      Parameters:
      pivotColumn - the column to pivot.
      Returns:
      (undocumented)
      See Also:
      • org.apache.spark.sql.Dataset.unpivot for the reverse operation, except for the aggregation.
      Inheritdoc:
    • pivot

      public RelationalGroupedDataset pivot(Column pivotColumn, scala.collection.immutable.Seq<Object> values)
      Description copied from class: RelationalGroupedDataset
      Pivots a column of the current DataFrame and performs the specified aggregation. This is an overloaded version of the pivot method with pivotColumn of the String type.

      
         // Compute the sum of earnings for each year by course with each course as a separate column
         df.groupBy($"year").pivot($"course", Seq("dotNET", "Java")).sum($"earnings")
       

      Specified by:
      pivot in class RelationalGroupedDataset<Dataset>
      Parameters:
      pivotColumn - the column to pivot.
      values - List of values that will be translated to columns in the output DataFrame.
      Returns:
      (undocumented)
      See Also:
      • org.apache.spark.sql.Dataset.unpivot for the reverse operation, except for the aggregation.
      Inheritdoc:
    • toString

      public String toString()
      Overrides:
      toString in class Object