public class SchemaUtils
extends Object
TODO: Merge this file with SchemaUtils.
| Constructor and Description |
|---|
| SchemaUtils() |
| Modifier and Type | Method and Description |
|---|---|
| static void | checkColumnNameDuplication(scala.collection.Seq<String> columnNames, boolean caseSensitiveAnalysis) Checks if input column names have duplicate identifiers. |
| static void | checkColumnNameDuplication(scala.collection.Seq<String> columnNames, scala.Function2<String,String,Object> resolver) Checks if input column names have duplicate identifiers. |
| static void | checkSchemaColumnNameDuplication(DataType schema, boolean caseSensitiveAnalysis) Checks if an input schema has duplicate column names. |
| static void | checkSchemaColumnNameDuplication(StructType schema, scala.Function2<String,String,Object> resolver) Checks if an input schema has duplicate column names. |
| static void | checkTransformDuplication(scala.collection.Seq<org.apache.spark.sql.connector.expressions.Transform> transforms, String checkType, boolean isCaseSensitive) Checks if the partitioning transforms are being duplicated or not. |
| static String | escapeMetaCharacters(String str) |
| static scala.collection.Seq<String> | explodeNestedFieldNames(StructType schema) Returns all column names in this schema as a flat list. |
| static scala.collection.Seq<Object> | findColumnPosition(scala.collection.Seq<String> column, StructType schema, scala.Function2<String,String,Object> resolver) Returns the given column's ordinal within the given schema. |
| static scala.collection.Seq<String> | getColumnName(scala.collection.Seq<Object> position, StructType schema) Gets the name of the column in the given position. |
| static scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> | restoreOriginalOutputNames(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> projectList, scala.collection.Seq<String> originalNames) |
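All methods on this class are static. The spark-shell style sketch below exercises the simplest overload, checkColumnNameDuplication with a case-sensitivity flag; the package (org.apache.spark.sql.util) and the exception-on-failure behaviour are assumptions based on Spark's source and are not stated on this page.

```scala
// Minimal sketch, not taken from this page. Assumptions: SchemaUtils is
// org.apache.spark.sql.util.SchemaUtils, and a failed check is reported by
// throwing an exception (AnalysisException in Spark), not through the return value.
import org.apache.spark.sql.util.SchemaUtils

// Passes: "id" and "ID" are distinct while the comparison is case sensitive.
SchemaUtils.checkColumnNameDuplication(Seq("id", "ID", "value"), caseSensitiveAnalysis = true)

// Expected to throw: the same names collide once case is ignored.
try SchemaUtils.checkColumnNameDuplication(Seq("id", "ID", "value"), caseSensitiveAnalysis = false)
catch { case e: Exception => println(s"duplicate column names: ${e.getMessage}") }
```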
public static void checkSchemaColumnNameDuplication(DataType schema, boolean caseSensitiveAnalysis)

Parameters:
schema - schema to check
caseSensitiveAnalysis - whether duplication checks should be case sensitive or not

public static void checkSchemaColumnNameDuplication(StructType schema, scala.Function2<String,String,Object> resolver)

Parameters:
schema - schema to check
resolver - resolver used to determine if two identifiers are equal

public static void checkColumnNameDuplication(scala.collection.Seq<String> columnNames, scala.Function2<String,String,Object> resolver)

Parameters:
columnNames - column names to check
resolver - resolver used to determine if two identifiers are equal

public static void checkColumnNameDuplication(scala.collection.Seq<String> columnNames, boolean caseSensitiveAnalysis)

Parameters:
columnNames - column names to check
caseSensitiveAnalysis - whether duplication checks should be case sensitive or not
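A sketch of the resolver-based overloads, under the same package assumption as above: a resolver is just a (String, String) => Boolean deciding whether two identifiers refer to the same column, so a plain lambda works.

```scala
// Spark-shell style sketch; package and exception behaviour are assumptions.
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}
import org.apache.spark.sql.util.SchemaUtils

val schema = new StructType()
  .add("id", IntegerType)
  .add("Id", StringType)

// A resolver simply decides whether two identifiers are equal.
val caseInsensitive: (String, String) => Boolean = _.equalsIgnoreCase(_)

// "id" and "Id" collide under a case-insensitive resolver, so this is expected to throw.
try SchemaUtils.checkSchemaColumnNameDuplication(schema, caseInsensitive)
catch { case e: Exception => println(s"schema check failed: ${e.getMessage}") }

// The Seq-based overload applies the same resolver to bare column names.
SchemaUtils.checkColumnNameDuplication(Seq("a", "b", "c"), caseInsensitive)
```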
public static scala.collection.Seq<String> explodeNestedFieldNames(StructType schema)

Returns all column names in this schema as a flat list.
Parameters:
schema - (undocumented)
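A sketch of what the flat list looks like for a nested schema. The dotted form of nested names is an assumption of this example; the page only says the result is a flat list.

```scala
// Spark-shell style sketch (same package assumption as above).
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}
import org.apache.spark.sql.util.SchemaUtils

val nested = new StructType()
  .add("a", new StructType()
    .add("b1", IntegerType)
    .add("b2", IntegerType))
  .add("c", StringType)

// Flat list of every column name in the schema, including nested fields,
// presumably something like Seq("a", "a.b1", "a.b2", "c").
SchemaUtils.explodeNestedFieldNames(nested).foreach(println)
```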
public static void checkTransformDuplication(scala.collection.Seq<org.apache.spark.sql.connector.expressions.Transform> transforms, String checkType, boolean isCaseSensitive)

Checks if the partitioning transforms are being duplicated or not.

Parameters:
transforms - the partitioning transforms to check for duplicates
checkType - contextual information around the check, used in an exception message
isCaseSensitive - whether to be case sensitive when comparing column names
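A sketch for the transform check. Building Transform values via the DataSource V2 Expressions factory (org.apache.spark.sql.connector.expressions.Expressions.identity) is an assumption of this example, not something this page documents.

```scala
// Spark-shell style sketch; package, factory, and exception behaviour are assumptions.
import org.apache.spark.sql.connector.expressions.Expressions
import org.apache.spark.sql.util.SchemaUtils

// Two identity transforms over the same column should count as a duplicate.
val transforms = Seq(Expressions.identity("region"), Expressions.identity("region"))

// checkType ("in the table partitioning" here) only flavours the exception message.
try SchemaUtils.checkTransformDuplication(transforms, "in the table partitioning", isCaseSensitive = false)
catch { case e: Exception => println(s"duplicate partitioning detected: ${e.getMessage}") }
```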
public static scala.collection.Seq<Object> findColumnPosition(scala.collection.Seq<String> column, StructType schema, scala.Function2<String,String,Object> resolver)

Returns the given column's ordinal within the given schema. The length of the returned position will be as long as how nested the column is.

Parameters:
column - The column to search for in the given struct. If the length of column is greater than 1, we expect to enter a nested field.
schema - The current struct we are looking at.
resolver - The resolver to find the column.
public static scala.collection.Seq<String> getColumnName(scala.collection.Seq<Object> position, StructType schema)

Gets the name of the column in the given position.

Parameters:
position - (undocumented)
schema - (undocumented)
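findColumnPosition and getColumnName are natural inverses: one resolves a (possibly nested) name to a path of field ordinals, the other maps that path back to name parts. A sketch under the same package and resolver assumptions as above:

```scala
// Spark-shell style sketch; package and resolver shape are assumptions.
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}
import org.apache.spark.sql.util.SchemaUtils

val schema = new StructType()
  .add("a", new StructType()
    .add("b", IntegerType))
  .add("c", StringType)

val resolver: (String, String) => Boolean = _.equalsIgnoreCase(_)

// One ordinal per nesting level: for a.b this should be Seq(0, 0).
val position = SchemaUtils.findColumnPosition(Seq("a", "b"), schema, resolver)

// Maps the ordinal path back to the name parts, e.g. Seq("a", "b").
val nameParts = SchemaUtils.getColumnName(position, schema)
println(s"position=$position nameParts=$nameParts")
```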
public static scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> restoreOriginalOutputNames(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.NamedExpression> projectList, scala.collection.Seq<String> originalNames)
public static String escapeMetaCharacters(String str)

Parameters:
str - The string to be escaped.
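The page does not say which characters count as "meta"; the sketch below only shows the call shape, guessing that control characters such as tabs and newlines in a column name are what gets escaped.

```scala
// Spark-shell style sketch (assumed package); the escaping rules are not
// documented on this page, so the expected output is only a guess.
import org.apache.spark.sql.util.SchemaUtils

// A column name containing characters that are awkward to print verbatim.
val raw = "amount\tdue\n2024"

// Presumably returns the string with such characters rendered in escaped form.
println(SchemaUtils.escapeMetaCharacters(raw))
```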