pyspark.sql.Column.between

Column.between(lowerBound: Union[Column, LiteralType, DateTimeLiteral, DecimalLiteral], upperBound: Union[Column, LiteralType, DateTimeLiteral, DecimalLiteral]) → Column

True if the current column is between the lower bound and upper bound, inclusive.

New in version 1.3.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
lowerBound : Column, int, float, string, bool, datetime, date or Decimal

the start of the range, inclusive; may be a column or a literal value.

upperBound : Column, int, float, string, bool, datetime, date or Decimal

the end of the range, inclusive; may be a column or a literal value.

Returns
Column

Column of booleans showing whether each element of the column is between lowerBound and upperBound (inclusive).

Examples

>>> df = spark.createDataFrame(
...      [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, df.age.between(2, 4)).show()
+-----+---------------------------+
| name|((age >= 2) AND (age <= 4))|
+-----+---------------------------+
|Alice|                       true|
|  Bob|                      false|
+-----+---------------------------+
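Both bounds may themselves be columns rather than literals. A minimal sketch (assuming the same `spark` session; the DataFrame and the `v`, `lo`, and `hi` column names are illustrative, not part of the API):

>>> df2 = spark.createDataFrame(
...      [(3, 1, 5), (7, 1, 5)], ["v", "lo", "hi"])
>>> df2.select(df2.v.between(df2.lo, df2.hi).alias("in_range")).show()
+--------+
|in_range|
+--------+
|    true|
|   false|
+--------+

Date columns can likewise be bounded by date-string literals, which Spark coerces to dates for the comparison; another illustrative sketch:

>>> from datetime import date
>>> df3 = spark.createDataFrame(
...      [(date(2023, 1, 15),), (date(2023, 6, 1),)], ["d"])
>>> df3.filter(df3.d.between("2023-01-01", "2023-03-31")).show()
+----------+
|         d|
+----------+
|2023-01-15|
+----------+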