Class WindowSpec

Package org.apache.spark.sql.expressions
class WindowSpec extends AnyRef

A window specification that defines the partitioning, ordering, and frame boundaries.

Use the static methods in Window to create a WindowSpec.

Annotations
@Stable()
Source
WindowSpec.scala
Since

1.4.0
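
A minimal usage sketch (the DataFrame df and its column names are illustrative, not part of this API): a WindowSpec is built with the Window factory methods and then passed to Column.over.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.sum

    // Partition rows by "category" and order them by "id".
    val spec = Window.partitionBy("category").orderBy("id")

    // Evaluate an aggregate over each row's window frame.
    val result = df.withColumn("running_sum", sum("id") over spec)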

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  10. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  11. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. final def notify(): Unit

    Definition Classes
    AnyRef
  14. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  15. def orderBy(cols: Column*): WindowSpec

    Defines the ordering columns in a WindowSpec.

    Annotations
    @varargs()
    Since

    1.4.0

  16. def orderBy(colName: String, colNames: String*): WindowSpec

    Defines the ordering columns in a WindowSpec.
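
    A small sketch covering both overloads (df and its column names are illustrative):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, rank}

    // String overload: order by column names.
    val byName = Window.partitionBy("category").orderBy("id")
    // Column overload: order by Column expressions, e.g. descending.
    val byExpr = Window.partitionBy(col("category")).orderBy(col("id").desc)

    df.withColumn("rank", rank() over byExpr)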

    Annotations
    @varargs()
    Since

    1.4.0

  17. def partitionBy(cols: Column*): WindowSpec

    Defines the partitioning columns in a WindowSpec.

    Annotations
    @varargs()
    Since

    1.4.0

  18. def partitionBy(colName: String, colNames: String*): WindowSpec

    Defines the partitioning columns in a WindowSpec.
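
    A small sketch covering both overloads (df and its column names are illustrative):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, row_number}

    // String overload: partition by one or more column names.
    val byNames = Window.partitionBy("region", "category").orderBy("id")
    // Column overload: partition by arbitrary Column expressions.
    val byExpr = Window.partitionBy(col("id") % 2).orderBy(col("id"))

    df.withColumn("rn", row_number() over byNames)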

    Annotations
    @varargs()
    Since

    1.4.0

  19. def rangeBetween(start: Column, end: Column): WindowSpec

    Defines the frame boundaries, from start (inclusive) to end (inclusive).

    Both start and end are relative to the current row. For example, "lit(0)" means "current row", while "lit(-1)" means one off before the current row, and "lit(5)" means five off after the current row.

    Users should use unboundedPreceding(), unboundedFollowing(), and currentRow() from org.apache.spark.sql.functions to specify special boundary values; plain literals are not transformed to org.apache.spark.sql.catalyst.expressions.SpecialFrameBoundary values.

    A range-based boundary is based on the actual value of the ORDER BY expression(s). An offset is used to alter the value of the ORDER BY expression; for instance, if the current ORDER BY expression has a value of 10 and the lower bound offset is -3, the resulting lower bound for the current row will be 10 - 3 = 7. This, however, puts a number of constraints on the ORDER BY expressions: there can be only one expression, and it must have a numerical, date, or timestamp data type. An exception is made when the offset is unbounded, because no value modification is needed; in that case multiple ORDER BY expressions of any data type are allowed.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._
    import spark.implicits._  // for toDF and the 'symbol column syntax

    val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
      .toDF("id", "category")
    val byCategoryOrderedById =
      Window.partitionBy('category).orderBy('id).rangeBetween(currentRow(), lit(1))
    df.withColumn("sum", sum('id) over byCategoryOrderedById).show()
    
    +---+--------+---+
    | id|category|sum|
    +---+--------+---+
    |  1|       b|  3|
    |  2|       b|  5|
    |  3|       b|  3|
    |  1|       a|  4|
    |  1|       a|  4|
    |  2|       a|  2|
    +---+--------+---+
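
    A further sketch, reusing df and the imports from the example above: the special boundary expressions from org.apache.spark.sql.functions can express an unbounded frame, here a running total per category.

    // Running total per category: from the start of the partition
    // to the current ORDER BY value (ties included).
    val runningTotal = Window.partitionBy('category).orderBy('id)
      .rangeBetween(unboundedPreceding(), currentRow())
    df.withColumn("total", sum('id) over runningTotal).show()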
    start

    boundary start, inclusive. The frame is unbounded if the expression is org.apache.spark.sql.catalyst.expressions.UnboundedPreceding.

    end

    boundary end, inclusive. The frame is unbounded if the expression is org.apache.spark.sql.catalyst.expressions.UnboundedFollowing.

    Since

    2.3.0

  20. def rangeBetween(start: Long, end: Long): WindowSpec

    Defines the frame boundaries, from start (inclusive) to end (inclusive).

    Both start and end are relative to the current row. For example, "0" means "current row", while "-1" means one off before the current row, and "5" means five off after the current row.

    We recommend users use Window.unboundedPreceding, Window.unboundedFollowing, and Window.currentRow to specify special boundary values, rather than using long values directly.

    A range-based boundary is based on the actual value of the ORDER BY expression(s). An offset is used to alter the value of the ORDER BY expression; for instance, if the current ORDER BY expression has a value of 10 and the lower bound offset is -3, the resulting lower bound for the current row will be 10 - 3 = 7. This, however, puts a number of constraints on the ORDER BY expressions: there can be only one expression, and it must have a numerical data type. An exception is made when the offset is unbounded, because no value modification is needed; in that case multiple non-numeric ORDER BY expressions are allowed.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.sum
    import spark.implicits._  // for toDF and the 'symbol column syntax

    val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
      .toDF("id", "category")
    val byCategoryOrderedById =
      Window.partitionBy('category).orderBy('id).rangeBetween(Window.currentRow, 1)
    df.withColumn("sum", sum('id) over byCategoryOrderedById).show()
    
    +---+--------+---+
    | id|category|sum|
    +---+--------+---+
    |  1|       b|  3|
    |  2|       b|  5|
    |  3|       b|  3|
    |  1|       a|  4|
    |  1|       a|  4|
    |  2|       a|  2|
    +---+--------+---+
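
    A quick sketch, reusing df and the imports from the example above: the recommended boundary constants express an unbounded frame, here a running total per category.

    // Running total per category, using the recommended boundary constants.
    val runningTotal = Window.partitionBy('category).orderBy('id)
      .rangeBetween(Window.unboundedPreceding, Window.currentRow)
    df.withColumn("total", sum('id) over runningTotal).show()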
    start

    boundary start, inclusive. The frame is unbounded if this is the minimum long value (Window.unboundedPreceding).

    end

    boundary end, inclusive. The frame is unbounded if this is the maximum long value (Window.unboundedFollowing).

    Since

    1.4.0

  21. def rowsBetween(start: Long, end: Long): WindowSpec

    Defines the frame boundaries, from start (inclusive) to end (inclusive).

    Both start and end are relative positions from the current row. For example, "0" means "current row", while "-1" means the row before the current row, and "5" means the fifth row after the current row.

    We recommend users use Window.unboundedPreceding, Window.unboundedFollowing, and Window.currentRow to specify special boundary values, rather than using integral values directly.

    A row-based boundary is based on the position of the row within the partition. An offset indicates the number of rows above or below the current row at which the frame for the current row starts or ends. For instance, given a row-based sliding frame with a lower bound offset of -1 and an upper bound offset of +2, the frame for the row with index 5 would range from index 4 to index 7.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.sum
    import spark.implicits._  // for toDF and the 'symbol column syntax

    val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
      .toDF("id", "category")
    val byCategoryOrderedById =
      Window.partitionBy('category).orderBy('id).rowsBetween(Window.currentRow, 1)
    df.withColumn("sum", sum('id) over byCategoryOrderedById).show()
    
    +---+--------+---+
    | id|category|sum|
    +---+--------+---+
    |  1|       b|  3|
    |  2|       b|  5|
    |  3|       b|  3|
    |  1|       a|  2|
    |  1|       a|  3|
    |  2|       a|  2|
    +---+--------+---+
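
    An additional sketch, reusing df from the example above: a symmetric row frame gives a centered moving average.

    import org.apache.spark.sql.functions.avg

    // Centered moving average: previous row, current row, next row.
    val centered = Window.partitionBy('category).orderBy('id).rowsBetween(-1, 1)
    df.withColumn("avg", avg('id) over centered).show()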
    start

    boundary start, inclusive. The frame is unbounded if this is the minimum long value (Window.unboundedPreceding).

    end

    boundary end, inclusive. The frame is unbounded if this is the maximum long value (Window.unboundedFollowing).

    Since

    1.4.0

  22. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  23. def toString(): String

    Definition Classes
    AnyRef → Any
  24. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
