Package org.apache.spark.sql.jdbc
Class TeradataDialect
java.lang.Object
  org.apache.spark.sql.jdbc.JdbcDialect
    org.apache.spark.sql.jdbc.TeradataDialect
- All Implemented Interfaces:
- Serializable, org.apache.spark.internal.Logging, NoLegacyJDBCError, scala.Equals, scala.Product
public class TeradataDialect
extends JdbcDialect
implements NoLegacyJDBCError, scala.Product, Serializable
- See Also:
- Serialized Form
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
- 
Constructor Summary
Constructors
- 
Method Summary
- abstract static R apply()
- boolean canHandle(String url): Check if this dialect instance can handle a certain jdbc url.
- scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md): Get the custom datatype mapping for the given jdbc meta information.
- scala.Option<JdbcType> getJDBCType(DataType dt): Retrieve the jdbc / sql type for a given datatype.
- String getLimitClause(Integer limit): Returns the LIMIT clause for the SELECT statement.
- String getTruncateQuery(String table, scala.Option<Object> cascade): The SQL query used to truncate a table.
- scala.Option<Object> isCascadingTruncateTable(): Return Some[true] iff TRUNCATE TABLE causes cascading by default.
- boolean isObjectNotFoundException(SQLException e)
- boolean isSupportedFunction(String funcName): Returns whether the database supports the function.
- boolean isSyntaxErrorBestEffort(SQLException exception): Attempts to determine if the given SQLException is a SQL syntax error.
- String renameTable(Identifier oldTable, Identifier newTable): Rename an existing table.
- static String toString()

Methods inherited from class org.apache.spark.sql.jdbc.JdbcDialect
alterTable, beforeFetch, classifyException, classifyException, compileAggregate, compileExpression, compileValue, convertJavaDateToDate, convertJavaTimestampToTimestamp, convertJavaTimestampToTimestampNTZ, convertTimestampNTZToJavaTimestamp, createConnectionFactory, createIndex, createSchema, createTable, dropIndex, dropSchema, dropTable, functions, getAddColumnQuery, getDayTimeIntervalAsMicros, getDeleteColumnQuery, getFullyQualifiedQuotedTableName, getJdbcSQLQueryBuilder, getOffsetClause, getRenameColumnQuery, getSchemaCommentQuery, getSchemaQuery, getTableCommentQuery, getTableExistsQuery, getTableSample, getTruncateQuery, getUpdateColumnNullabilityQuery, getUpdateColumnTypeQuery, getYearMonthIntervalAsMonths, indexExists, insertIntoTable, listIndexes, listSchemas, quoteIdentifier, removeSchemaCommentQuery, renameTable, schemasExists, supportsHint, supportsJoin, supportsLimit, supportsOffset, supportsTableSample, updateExtraColumnMeta

Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface scala.Equals
canEqual, equals

Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.sql.jdbc.NoLegacyJDBCError
classifyException

Methods inherited from interface scala.Product
productArity, productElement, productElementName, productElementNames, productIterator, productPrefix
- 
Constructor Details
- TeradataDialect
public TeradataDialect()
 
- 
- 
Method Details
- apply
public abstract static R apply()
- 
toString
public static String toString()
- 
canHandle
public boolean canHandle(String url)
Description copied from class: JdbcDialect
Check if this dialect instance can handle a certain jdbc url.
- Specified by:
- canHandle in class JdbcDialect
- Parameters:
- url - the jdbc url.
- Returns:
- True if the dialect can be applied on the given jdbc url.
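For illustration, a minimal sketch of this check, assuming the class can be instantiated directly as documented and that the dialect matches only Teradata JDBC URLs (the URLs below are placeholders):

    import org.apache.spark.sql.jdbc.TeradataDialect

    val dialect = new TeradataDialect()
    // Expected to match Teradata JDBC URLs only.
    val handlesTeradata = dialect.canHandle("jdbc:teradata://tdhost/DATABASE=sales")  // likely true
    val handlesPostgres = dialect.canHandle("jdbc:postgresql://pghost/sales")         // false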
 
- 
isSupportedFunction
public boolean isSupportedFunction(String funcName)
Description copied from class: JdbcDialect
Returns whether the database supports the function.
- Overrides:
- isSupportedFunction in class JdbcDialect
- Parameters:
- funcName - Upper-cased function name
- Returns:
- True if the database supports the function.
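A hedged sketch of how a caller might consult this check before pushing a function down to Teradata; the function name is a placeholder and the result depends on the dialect's supported set:

    val dialect = new TeradataDialect()
    // Function names are passed upper-cased, as documented above.
    if (dialect.isSupportedFunction("ABS")) {
      // Safe to push the function down to the database.
    } else {
      // Fall back to evaluating the function in Spark.
    }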
 
- 
isObjectNotFoundException
- Overrides:
- isObjectNotFoundException in class JdbcDialect
 
- 
getJDBCType
public scala.Option<JdbcType> getJDBCType(DataType dt)
Description copied from class: JdbcDialect
Retrieve the jdbc / sql type for a given datatype.
- Overrides:
- getJDBCType in class JdbcDialect
- Parameters:
- dt - The datatype (e.g. StringType)
- Returns:
- The new JdbcType if there is an override for this DataType
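An illustrative sketch, not an assertion of the exact Teradata mapping, showing how a caller inspects the optional override:

    import org.apache.spark.sql.types.StringType

    val dialect = new TeradataDialect()
    dialect.getJDBCType(StringType) match {
      case Some(jdbcType) =>
        // Dialect-specific column definition, e.g. a VARCHAR type for Teradata.
        println(s"column definition: ${jdbcType.databaseTypeDefinition}")
      case None =>
        // No override; Spark falls back to its default JDBC type mapping.
        println("using the default mapping")
    }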
 
- 
isCascadingTruncateTable
public scala.Option<Object> isCascadingTruncateTable()
Description copied from class: JdbcDialect
Return Some[true] iff TRUNCATE TABLE causes cascading by default.
Some[true]: TRUNCATE TABLE causes cascading.
Some[false]: TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
- Overrides:
- isCascadingTruncateTable in class JdbcDialect
- Returns:
- (undocumented)
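A short sketch of how the three possible results can be interpreted by a caller (in Scala the result is an Option[Boolean]); the value TeradataDialect actually returns is not asserted here:

    val dialect = new TeradataDialect()
    dialect.isCascadingTruncateTable() match {
      case Some(true)  => // TRUNCATE TABLE cascades on this database
      case Some(false) => // TRUNCATE TABLE does not cascade
      case None        => // behavior unknown; callers use the default
    }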
 
- 
isSyntaxErrorBestEffort
public boolean isSyntaxErrorBestEffort(SQLException exception)
Description copied from class: JdbcDialect
Attempts to determine if the given SQLException is a SQL syntax error.
This check is best-effort: it may not detect all syntax errors across all JDBC dialects. However, if this method returns true, the exception is guaranteed to be a syntax error. This is used to decide whether to wrap the exception in a more appropriate Spark exception.
- Overrides:
- isSyntaxErrorBestEffort in class JdbcDialect
- Parameters:
- exception - (undocumented)
- Returns:
- true if the exception is confidently identified as a syntax error; false otherwise.
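A sketch of the intended usage pattern when classifying a driver error; the message strings are placeholders, not Spark's actual error classes:

    import java.sql.SQLException

    def describeError(dialect: TeradataDialect, e: SQLException): String =
      if (dialect.isSyntaxErrorBestEffort(e)) {
        // Per the contract above, a true result is a confident syntax-error identification.
        "syntax error in the generated Teradata SQL"
      } else {
        "unclassified JDBC error"
      }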
 
- 
getTruncateQuery
public String getTruncateQuery(String table, scala.Option<Object> cascade)
The SQL query used to truncate a table. Teradata does not support the 'TRUNCATE' syntax that other dialects use. Instead, we need to use a 'DELETE FROM' statement.
- Overrides:
- getTruncateQuery in class JdbcDialect
- Parameters:
- table - The table to truncate.
- cascade - Whether or not to cascade the truncation. The default value is the value of isCascadingTruncateTable(). Teradata does not support cascading a 'DELETE FROM' statement (and, as mentioned, does not support the 'TRUNCATE' syntax).
- Returns:
- The SQL query to use for truncating a table
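A minimal sketch; the exact statement text is an assumption based on the description above (a DELETE-based query) and is not quoted from the dialect's output:

    val dialect = new TeradataDialect()
    // Teradata has no TRUNCATE TABLE, so the dialect emits a DELETE FROM based statement
    // for the given table, e.g. something along the lines of DELETE FROM sales.orders ALL.
    val truncateSql = dialect.getTruncateQuery("sales.orders", dialect.isCascadingTruncateTable())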
 
- 
renameTable
public String renameTable(Identifier oldTable, Identifier newTable)
Description copied from class: JdbcDialect
Rename an existing table.
- Overrides:
- renameTable in class JdbcDialect
- Parameters:
- oldTable - The existing table.
- newTable - New name of the table.
- Returns:
- The SQL statement to use for renaming the table.
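A sketch using the connector Identifier helper; the table names are placeholders and the exact SQL text produced for Teradata is not asserted:

    import org.apache.spark.sql.connector.catalog.Identifier

    val dialect = new TeradataDialect()
    val oldTable = Identifier.of(Array("sales"), "orders_tmp")
    val newTable = Identifier.of(Array("sales"), "orders")
    // Returns the dialect-specific rename statement for Teradata.
    val renameSql = dialect.renameTable(oldTable, newTable)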
 
- 
getLimitClause
public String getLimitClause(Integer limit)
Description copied from class: JdbcDialect
Returns the LIMIT clause for the SELECT statement.
- Overrides:
- getLimitClause in class JdbcDialect
- Parameters:
- limit - (undocumented)
- Returns:
- (undocumented)
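A small sketch; note that a dialect may return an empty clause when the LIMIT keyword is not supported by the database, and the concrete behavior for Teradata is not asserted here:

    val dialect = new TeradataDialect()
    // The clause Spark appends when pushing a row limit into the generated SELECT.
    val limitClause = dialect.getLimitClause(10)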
 
- 
getCatalystType
public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
Description copied from class: JdbcDialect
Get the custom datatype mapping for the given jdbc meta information.
Guidelines for mapping database defined timestamps to Spark SQL timestamps:
- TIMESTAMP WITHOUT TIME ZONE if preferTimestampNTZ -> TimestampNTZType
- TIMESTAMP WITHOUT TIME ZONE if !preferTimestampNTZ -> TimestampType(LTZ)
- TIMESTAMP WITH TIME ZONE -> TimestampType(LTZ)
- TIMESTAMP WITH LOCAL TIME ZONE -> TimestampType(LTZ)
- If the TIMESTAMP cannot be distinguished by sqlType and typeName, preferTimestampNTZ is respected for now, but we may need to add another option in the future if necessary.
- Overrides:
- getCatalystType in class JdbcDialect
- Parameters:
- sqlType - Refers to Types constants, or other constants defined by the target database, e.g. -101 is Oracle's TIMESTAMP WITH TIME ZONE type. This value is returned by ResultSetMetaData.getColumnType(int).
- typeName - The column type name used by the database (e.g. "BIGINT UNSIGNED"). This is sometimes used to determine the target data type when sqlType is not sufficient, if multiple database types are conflated into a single id. This value is returned by ResultSetMetaData.getColumnTypeName(int).
- size - The size of the type, e.g. the maximum precision for numeric types, length for character strings, etc. This value is returned by ResultSetMetaData.getPrecision(int).
- md - Result metadata associated with this type. This contains additional information from ResultSetMetaData or user-specified options.
  - isTimestampNTZ: Whether to read a TIMESTAMP WITHOUT TIME ZONE value as TimestampNTZType or not. This is configured by JDBCOptions.preferTimestampNTZ.
  - scale: The length of the fractional part (ResultSetMetaData.getScale(int)).
- Returns:
- An Option of the actual DataType (a subclass of DataType), or None if the default type mapping should be used.
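A hedged sketch of a call with hand-built metadata; the metadata keys mirror the parameter documentation above, the literal values are placeholders, and the concrete result for Teradata is not asserted:

    import java.sql.Types
    import org.apache.spark.sql.types.MetadataBuilder

    val dialect = new TeradataDialect()
    val md = new MetadataBuilder()
      .putBoolean("isTimestampNTZ", true)  // driven by JDBCOptions.preferTimestampNTZ
      .putLong("scale", 6L)                // fractional-second digits, see ResultSetMetaData.getScale
    // None means Spark's default JDBC-to-Catalyst mapping is used for this column.
    val catalystType = dialect.getCatalystType(Types.TIMESTAMP, "TIMESTAMP", 26, md)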
 
- 
     TIMESTAMP WITHOUT TIME ZONE if preferTimestampNTZ ->
     
 
-