pyspark.sql.functions.try_reflect
- pyspark.sql.functions.try_reflect(*cols)
- This is a special version of reflect that performs the same operation, but returns NULL instead of raising an error if the invoked method throws an exception.
- New in version 4.0.0.
- Parameters
- cols : Column or column name
- the first element should be a Column holding a literal string with the class name, the second element a Column holding a literal string with the method name, and the remaining elements are the input arguments (Columns or column names) passed to the Java method; see the sketch after this parameter list.
 
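To make this argument layout concrete, here is a minimal sketch (an addition, not part of the original page) that calls the static Java method java.lang.Integer.toHexString. It assumes an active SparkSession named spark, as in the examples below, and a hypothetical integer column n. The explicit cast to int is needed because reflection matches Java methods by exact argument type, while createDataFrame infers Python integers as bigint.

>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame([(255,)], ["n"])  # hypothetical input column n
>>> df.select(
...     sf.try_reflect(
...         sf.lit("java.lang.Integer"),  # first: class name as a literal string
...         sf.lit("toHexString"),        # second: method name as a literal string
...         sf.col("n").cast("int"),      # rest: arguments passed to the method
...     ).alias("hex")
... ).show()
+---+
|hex|
+---+
| ff|
+---+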
- Examples

Example 1: Reflecting a method call with arguments

>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame([("a5cf6c42-0c85-418f-af6c-3e4e5b1328f2",)], ["a"])
>>> df.select(
...     sf.try_reflect(sf.lit("java.util.UUID"), sf.lit("fromString"), "a")
... ).show(truncate=False)
+------------------------------------------+
|try_reflect(java.util.UUID, fromString, a)|
+------------------------------------------+
|a5cf6c42-0c85-418f-af6c-3e4e5b1328f2      |
+------------------------------------------+

Example 2: Exception in the reflection call, resulting in null

>>> from pyspark.sql import functions as sf
>>> spark.range(1).select(
...     sf.try_reflect(sf.lit("scala.Predef"), sf.lit("require"), sf.lit(False))
... ).show(truncate=False)
+-----------------------------------------+
|try_reflect(scala.Predef, require, false)|
+-----------------------------------------+
|NULL                                     |
+-----------------------------------------+
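Because failures surface as NULL rather than as exceptions, a natural follow-up is to substitute a fallback value with coalesce. The sketch below (an addition, not part of the original page) reuses the failing call from Example 2 and assumes the same active SparkSession named spark.

>>> from pyspark.sql import functions as sf
>>> spark.range(1).select(
...     sf.coalesce(
...         sf.try_reflect(sf.lit("scala.Predef"), sf.lit("require"), sf.lit(False)),
...         sf.lit("fallback"),  # used wherever the reflective call returned NULL
...     ).alias("result")
... ).show()
+--------+
|  result|
+--------+
|fallback|
+--------+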