pyspark.sql.DataFrame.cube

DataFrame.cube(*cols)

Create a multi-dimensional cube for the current DataFrame using the specified columns, so we can run aggregations on them. The cube aggregates over every subset of the grouping columns; columns that are rolled up in a given row appear as null, so the all-null row in the example below is the grand total.

New in version 1.4.0.

Examples

>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.cube("name", df.age).count().orderBy("name", "age").show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| null|null|    2|
| null|   2|    1|
| null|   5|    1|
|Alice|null|    1|
|Alice|   2|    1|
|  Bob|null|    1|
|  Bob|   5|    1|
+-----+----+-----+
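
Because cube accepts any aggregate, the same grouping sets can feed functions other than count. A minimal sketch, using the same df as above: pyspark.sql.functions.grouping (available from Spark 2.0) marks whether a column was rolled up in a given row, which distinguishes a subtotal null from a null that is actually in the data.

>>> from pyspark.sql import functions as F
>>> df.cube("name").agg(
...     F.grouping("name").alias("is_subtotal"),
...     F.avg("age").alias("avg_age"),
... ).orderBy("name").show()
+-----+-----------+-------+
| name|is_subtotal|avg_age|
+-----+-----------+-------+
| null|          1|    3.5|
|Alice|          0|    2.0|
|  Bob|          0|    5.0|
+-----+-----------+-------+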