pyspark.sql.functions.regr_syy
- pyspark.sql.functions.regr_syy(y, x)
Aggregate function: returns REGR_COUNT(y, x) * VAR_POP(y) for non-null pairs in a group, where y is the dependent variable and x is the independent variable.
New in version 3.5.0.
- Parameters
y : Column or str
    the dependent variable.
x : Column or str
    the independent variable.
- Returns
Column
    REGR_COUNT(y, x) * VAR_POP(y) for non-null pairs in a group.
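The definition above can be checked by hand: over the pairs where both y and x are non-null, the result is the pair count times the population variance of y. A minimal plain-Python sketch of that computation (the helper name `regr_syy_by_hand` is illustrative, not part of PySpark):

```python
def regr_syy_by_hand(pairs):
    """Compute REGR_COUNT(y, x) * VAR_POP(y) for (y, x) pairs.

    `pairs` is an iterable of (y, x) tuples; None stands in for SQL NULL.
    Returns None when there are no non-null pairs, mirroring the SQL result.
    """
    # Keep only pairs where both y and x are non-null.
    ys = [y for y, x in pairs if y is not None and x is not None]
    if not ys:
        return None  # no non-null pairs -> NULL, as in Examples 2 and 3
    n = len(ys)
    mean_y = sum(ys) / n
    # Population variance: mean of squared deviations (divide by n, not n - 1).
    var_pop = sum((y - mean_y) ** 2 for y in ys) / n
    return n * var_pop

print(regr_syy_by_hand([(1, 1), (2, 2), (3, 3), (4, 4)]))  # 5.0, as in Example 1
```

For the pairs in Example 5, only (1, 1) and (4, 4) survive the null filter, giving 2 * VAR_POP([1, 4]) = 2 * 2.25 = 4.5, matching the output shown there.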
Examples
Example 1: All pairs are non-null
>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, 1), (2, 2), (3, 3), (4, 4) AS tab(y, x)")
>>> df.select(sf.regr_syy("y", "x")).show()
+--------------+
|regr_syy(y, x)|
+--------------+
|           5.0|
+--------------+
Example 2: All pairs’ x values are null
>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, null) AS tab(y, x)")
>>> df.select(sf.regr_syy("y", "x")).show()
+--------------+
|regr_syy(y, x)|
+--------------+
|          NULL|
+--------------+
Example 3: All pairs’ y values are null
>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (null, 1) AS tab(y, x)")
>>> df.select(sf.regr_syy("y", "x")).show()
+--------------+
|regr_syy(y, x)|
+--------------+
|          NULL|
+--------------+
Example 4: Some pairs’ x values are null
>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, 1), (2, null), (3, 3), (4, 4) AS tab(y, x)")
>>> df.select(sf.regr_syy("y", "x")).show()
+-----------------+
|   regr_syy(y, x)|
+-----------------+
|4.666666666666...|
+-----------------+
Example 5: Some pairs’ x or y values are null
>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, 1), (2, null), (null, 3), (4, 4) AS tab(y, x)")
>>> df.select(sf.regr_syy("y", "x")).show()
+--------------+
|regr_syy(y, x)|
+--------------+
|           4.5|
+--------------+