Package org.apache.spark.sql
Class TableValuedFunction

java.lang.Object
  org.apache.spark.sql.TableValuedFunction

Interface for invoking table-valued functions in Spark SQL.

Since: 4.0.0
Constructor Summary

Constructors
- TableValuedFunction()

Method Summary

- collations() - Gets all of the Spark SQL string collations.
- explode(Column collection) - Creates a DataFrame containing a new row for each element in the given array or map column.
- explode_outer(Column collection) - Creates a DataFrame containing a new row for each element in the given array or map column.
- inline(Column input) - Creates a DataFrame containing a new row for each element in the given array of structs.
- inline_outer(Column input) - Creates a DataFrame containing a new row for each element in the given array of structs.
- json_tuple(Column input, Column... fields) - Creates a DataFrame containing a new row for a JSON column according to the given field names.
- json_tuple(Column input, scala.collection.immutable.Seq<Column> fields) - Creates a DataFrame containing a new row for a JSON column according to the given field names.
- posexplode(Column collection) - Creates a DataFrame containing a new row for each element with position in the given array or map column.
- posexplode_outer(Column collection) - Creates a DataFrame containing a new row for each element with position in the given array or map column.
- range(long end) - Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
- range(long start, long end) - Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
- range(long start, long end, long step) - Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
- range(long start, long end, long step, int numPartitions) - Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, using the specified number of partitions.
- sql_keywords() - Gets Spark SQL keywords.
- stack(Column n, Column... fields) - Separates col1, ..., colk into n rows.
- stack(Column n, scala.collection.immutable.Seq<Column> fields) - Separates col1, ..., colk into n rows.
- variant_explode(Column input) - Separates a variant object/array into multiple rows containing its fields/elements.
- variant_explode_outer(Column input) - Separates a variant object/array into multiple rows containing its fields/elements.
Constructor Details

- TableValuedFunction
  public TableValuedFunction()
 
Method Details

- collations
  Gets all of the Spark SQL string collations.
  - Returns: (undocumented)
  - Since: 4.0.0
 
- explode
  Creates a DataFrame containing a new row for each element in the given array or map column. Uses the default column name col for elements in the array, and key and value for elements in the map, unless specified otherwise.
  - Parameters:
    - collection - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
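
  As a usage sketch (assuming a local SparkSession named spark, and that this interface is reached through the SparkSession's tvf accessor as in Spark 4.0):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{array, lit}

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Each element of the array literal becomes its own row,
  // in a single column with the default name "col".
  val df = spark.tvf.explode(array(lit(1), lit(2), lit(3)))
  // df has three rows: 1, 2, 3
  ```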
 
- explode_outer
  Creates a DataFrame containing a new row for each element in the given array or map column. Uses the default column name col for elements in the array, and key and value for elements in the map, unless specified otherwise. Unlike explode, if the array/map is null or empty then null is produced.
  - Parameters:
    - collection - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
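
  A sketch contrasting the two variants on a null input (assuming a local SparkSession named spark and the Spark 4.0 spark.tvf accessor):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.lit

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // A SQL NULL array: explode yields zero rows,
  // while explode_outer yields a single row containing null.
  val nullArray = lit(null).cast("array<int>")
  val outer = spark.tvf.explode_outer(nullArray)
  val inner = spark.tvf.explode(nullArray)
  ```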
 
- inline
  Creates a DataFrame containing a new row for each element in the given array of structs.
  - Parameters:
    - input - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
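
  A minimal sketch (assuming a local SparkSession named spark and the Spark 4.0 spark.tvf accessor):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{array, lit, struct}

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Each struct in the array becomes one row; the struct fields
  // ("a" and "b" here) become top-level columns.
  val df = spark.tvf.inline(array(
    struct(lit(1).as("a"), lit("x").as("b")),
    struct(lit(2).as("a"), lit("y").as("b"))))
  ```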
 
- inline_outer
  Creates a DataFrame containing a new row for each element in the given array of structs. Unlike inline, if the array is null or empty then null is produced for each nested column.
  - Parameters:
    - input - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- json_tuple
  public abstract Dataset<Row> json_tuple(Column input, Column... fields)
  Creates a DataFrame containing a new row for a JSON column according to the given field names.
  - Parameters:
    - input - (undocumented)
    - fields - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- json_tuple
  public abstract Dataset<Row> json_tuple(Column input, scala.collection.immutable.Seq<Column> fields)
  Creates a DataFrame containing a new row for a JSON column according to the given field names.
  - Parameters:
    - input - (undocumented)
    - fields - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
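
  A sketch of the varargs overload (assuming a local SparkSession named spark and the Spark 4.0 spark.tvf accessor; the output column names c0, c1 are json_tuple's defaults):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.lit

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Extract fields "a" and "b" from a JSON string; each requested
  // field becomes a string column (c0, c1, ...) in a single row.
  val df = spark.tvf.json_tuple(
    lit("""{"a": 1, "b": "two"}"""),
    lit("a"), lit("b"))
  ```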
 
- posexplode
  Creates a DataFrame containing a new row for each element with position in the given array or map column. Uses the default column name pos for position, col for elements in the array, and key and value for elements in the map, unless specified otherwise.
  - Parameters:
    - collection - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
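
  A minimal sketch (assuming a local SparkSession named spark and the Spark 4.0 spark.tvf accessor):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{array, lit}

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Two rows: (pos=0, col="a") and (pos=1, col="b").
  val df = spark.tvf.posexplode(array(lit("a"), lit("b")))
  ```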
 
- posexplode_outer
  Creates a DataFrame containing a new row for each element with position in the given array or map column. Uses the default column name pos for position, col for elements in the array, and key and value for elements in the map, unless specified otherwise. Unlike posexplode, if the array/map is null or empty then the row (null, null) is produced.
  - Parameters:
    - collection - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- range
  Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
  - Parameters:
    - end - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- range
  Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
  - Parameters:
    - start - (undocumented)
    - end - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- range
  Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
  - Parameters:
    - start - (undocumented)
    - end - (undocumented)
    - step - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- range
  Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, using the specified number of partitions.
  - Parameters:
    - start - (undocumented)
    - end - (undocumented)
    - step - (undocumented)
    - numPartitions - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
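
  A sketch of the four-argument overload (assuming a local SparkSession named spark and the Spark 4.0 spark.tvf accessor):

  ```scala
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Five ids (0, 2, 4, 6, 8) in a LongType column named "id",
  // distributed across 2 partitions.
  val ds = spark.tvf.range(0, 10, 2, 2)
  ```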
 
- sql_keywords
  Gets Spark SQL keywords.
  - Returns: (undocumented)
  - Since: 4.0.0
 
- stack
  public abstract Dataset<Row> stack(Column n, Column... fields)
  Separates col1, ..., colk into n rows. Uses column names col0, col1, etc. by default unless specified otherwise.
  - Parameters:
    - n - (undocumented)
    - fields - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
- stack
  public abstract Dataset<Row> stack(Column n, scala.collection.immutable.Seq<Column> fields)
  Separates col1, ..., colk into n rows. Uses column names col0, col1, etc. by default unless specified otherwise.
  - Parameters:
    - n - (undocumented)
    - fields - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
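
  A minimal sketch (assuming a local SparkSession named spark and the Spark 4.0 spark.tvf accessor):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.lit

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Fold six values into n = 2 rows of three columns each:
  // (1, 2, 3) and (4, 5, 6), named col0, col1, col2 by default.
  val df = spark.tvf.stack(lit(2),
    lit(1), lit(2), lit(3), lit(4), lit(5), lit(6))
  ```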
 
- variant_explode
  Separates a variant object/array into multiple rows containing its fields/elements. The result schema is struct<pos int, key string, value variant>. pos is the position of the field/element in its parent object/array, and value is the field/element value. key is the field name when exploding a variant object, or NULL when exploding a variant array. Any input that is not a variant array/object is ignored, including SQL NULL, variant null, and all other variant values.
  - Parameters:
    - input - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
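
  A minimal sketch (assuming a local SparkSession named spark, the Spark 4.0 spark.tvf accessor, and the parse_json function to build a variant value):

  ```scala
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{lit, parse_json}

  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // Explode a variant array into (pos, key, value) rows;
  // key is NULL here because the input is an array, not an object.
  val df = spark.tvf.variant_explode(parse_json(lit("""[10, "x", true]""")))
  ```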
 
- variant_explode_outer
  Separates a variant object/array into multiple rows containing its fields/elements. The result schema is struct<pos int, key string, value variant>. pos is the position of the field/element in its parent object/array, and value is the field/element value. key is the field name when exploding a variant object, or NULL when exploding a variant array. Unlike variant_explode, if the given input is not a variant array/object (including SQL NULL, variant null, and all other variant values), then NULL is produced.
  - Parameters:
    - input - (undocumented)
  - Returns: (undocumented)
  - Since: 4.0.0
 
 