pyspark.sql.GroupedData.min

GroupedData.min(*cols: str) → pyspark.sql.dataframe.DataFrame

Computes the min value for each numeric column for each group.

New in version 1.3.0.

Parameters
cols : str

column names. Non-numeric columns are ignored.

Examples
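
The example DataFrames df and df3 are not defined on this page. A minimal setup sketch, assuming an active SparkSession named spark and sample data consistent with the results shown below:

>>> df = spark.createDataFrame(
...     [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df3 = spark.createDataFrame(
...     [(2, 80), (5, 100)], ["age", "height"])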

>>> df.groupBy().min('age').collect()
[Row(min(age)=2)]
>>> df3.groupBy().min('age', 'height').collect()
[Row(min(age)=2, min(height)=80)]