Package org.apache.spark.sql.jdbc
Class OracleDialect
Object
org.apache.spark.sql.jdbc.JdbcDialect
org.apache.spark.sql.jdbc.OracleDialect
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, org.apache.spark.sql.catalyst.SQLConfHelper, NoLegacyJDBCError, scala.Equals, scala.Product
public class OracleDialect
extends JdbcDialect
implements org.apache.spark.sql.catalyst.SQLConfHelper, NoLegacyJDBCError, scala.Product, Serializable
Nested Class Summary
Nested Classes:
class OracleDialect.OracleSQLBuilder
class OracleDialect.OracleSQLQueryBuilder
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
Constructors:
OracleDialect()
-
Method Summary
Modifier and Type / Method / Description:
static final int BINARY_DOUBLE()
static final int BINARY_FLOAT()
boolean canHandle(String url) - Check if this dialect instance can handle a certain jdbc url.
Throwable classifyException(Throwable e, String condition, scala.collection.immutable.Map<String, String> messageParameters, String description, boolean isRuntime) - Gets a dialect exception, classifies it and wraps it by AnalysisException.
scala.Option<String> compileExpression(Expression expr) - Converts V2 expression to String representing a SQL expression.
Object compileValue(Object value) - Converts value to SQL expression.
String getAddColumnQuery(String tableName, String columnName, String dataType)
scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md) - Get the custom datatype mapping for the given jdbc meta information.
JdbcSQLQueryBuilder getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options) - Returns the SQL builder for the SELECT statement.
scala.Option<JdbcType> getJDBCType(DataType dt) - Retrieve the jdbc / sql type for a given datatype.
String getLimitClause(Integer limit) - Returns the LIMIT clause for the SELECT statement.
String getOffsetClause(Integer offset) - Returns the OFFSET clause for the SELECT statement.
String getTruncateQuery(String table, scala.Option<Object> cascade) - The SQL query used to truncate a table.
String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
static final int INTERVAL_DS()
static final int INTERVAL_YM()
scala.Option<Object> isCascadingTruncateTable() - Return Some[true] iff TRUNCATE TABLE causes cascading by default.
boolean isSupportedFunction(String funcName) - Returns whether the database supports function.
boolean supportsHint()
boolean supportsLimit() - Returns true if dialect supports LIMIT clause.
boolean supportsOffset() - Returns true if dialect supports OFFSET clause.
static final int TIMESTAMP_LTZ()
static final int TIMESTAMP_TZ()
Methods inherited from class org.apache.spark.sql.jdbc.JdbcDialect
alterTable, beforeFetch, classifyException, compileAggregate, convertJavaDateToDate, convertJavaTimestampToTimestamp, convertJavaTimestampToTimestampNTZ, convertTimestampNTZToJavaTimestamp, createConnectionFactory, createIndex, createSchema, createTable, dropIndex, dropSchema, dropTable, functions, getDayTimeIntervalAsMicros, getDeleteColumnQuery, getFullyQualifiedQuotedTableName, getRenameColumnQuery, getSchemaCommentQuery, getSchemaQuery, getTableCommentQuery, getTableExistsQuery, getTableSample, getTruncateQuery, getYearMonthIntervalAsMonths, indexExists, insertIntoTable, listIndexes, listSchemas, quoteIdentifier, removeSchemaCommentQuery, renameTable, renameTable, schemasExists, supportsTableSample, updateExtraColumnMeta
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface scala.Equals
canEqual, equals
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Methods inherited from interface scala.Product
productArity, productElement, productElementName, productElementNames, productIterator, productPrefix
Methods inherited from interface org.apache.spark.sql.catalyst.SQLConfHelper
conf, withSQLConf
-
Constructor Details
-
OracleDialect
public OracleDialect()
-
-
Method Details
-
BINARY_FLOAT
public static final int BINARY_FLOAT() -
BINARY_DOUBLE
public static final int BINARY_DOUBLE() -
TIMESTAMP_TZ
public static final int TIMESTAMP_TZ() -
TIMESTAMP_LTZ
public static final int TIMESTAMP_LTZ() -
INTERVAL_YM
public static final int INTERVAL_YM() -
INTERVAL_DS
public static final int INTERVAL_DS() -
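Example (a minimal sketch): the constants above appear to mirror Oracle's vendor-specific JDBC type codes, e.g. the value that ResultSetMetaData.getColumnType reports for an Oracle column. The helper name describeOracleTypeCode and its textual labels are illustrative assumptions, not part of the API, and accessing the constants from Scala as companion-object members is assumed from the static signatures above.
import org.apache.spark.sql.jdbc.OracleDialect

// Illustrative helper (not part of Spark): label a vendor type code reported by
// ResultSetMetaData.getColumnType for an Oracle column.
def describeOracleTypeCode(code: Int): String = code match {
  case c if c == OracleDialect.BINARY_FLOAT  => "BINARY_FLOAT"
  case c if c == OracleDialect.BINARY_DOUBLE => "BINARY_DOUBLE"
  case c if c == OracleDialect.TIMESTAMP_TZ  => "TIMESTAMP WITH TIME ZONE"
  case c if c == OracleDialect.TIMESTAMP_LTZ => "TIMESTAMP WITH LOCAL TIME ZONE"
  case c if c == OracleDialect.INTERVAL_YM   => "INTERVAL YEAR TO MONTH"
  case c if c == OracleDialect.INTERVAL_DS   => "INTERVAL DAY TO SECOND"
  case other                                 => s"other ($other)"
}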
canHandle
public boolean canHandle(String url)
Description copied from class: JdbcDialect
Check if this dialect instance can handle a certain jdbc url.
- Specified by:
canHandle in class JdbcDialect
- Parameters:
url - the jdbc url.
- Returns:
- True if the dialect can be applied on the given jdbc url.
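Example (a minimal sketch; the host and service name in the URL are made up): the registered Oracle dialect is looked up through JdbcDialects.get, and canHandle accepts jdbc:oracle: URLs.
import org.apache.spark.sql.jdbc.JdbcDialects

val url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"      // made-up connection details
val dialect = JdbcDialects.get(url)                       // resolves to the Oracle dialect
println(dialect.canHandle(url))                           // true for jdbc:oracle: URLs
println(dialect.canHandle("jdbc:postgresql://dbhost/db")) // false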
-
isSupportedFunction
public boolean isSupportedFunction(String funcName)
Description copied from class: JdbcDialect
Returns whether the database supports function.
- Overrides:
isSupportedFunction in class JdbcDialect
- Parameters:
funcName - Upper-cased function name
- Returns:
- True if the database supports function.
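Example (sketch; the URL is made up): funcName is expected upper-cased, and the exact set of functions the Oracle dialect accepts depends on the Spark version, so the first result below is not asserted.
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
println(dialect.isSupportedFunction("SQRT"))           // depends on the Spark version
println(dialect.isSupportedFunction("NOT_A_FUNCTION")) // false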
-
compileExpression
public scala.Option<String> compileExpression(Expression expr)
Description copied from class: JdbcDialect
Converts V2 expression to String representing a SQL expression.
- Overrides:
compileExpression in class JdbcDialect
- Parameters:
expr - The V2 expression to be converted.
- Returns:
- Converted value.
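Example (sketch; the URL is made up): Expressions.column builds a simple V2 column reference; expressions the dialect cannot translate yield None, which is why the result is an Option.
import org.apache.spark.sql.connector.expressions.Expressions
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
// A plain column reference compiles to its SQL form (with dialect quoting);
// unsupported expressions would produce None instead.
println(dialect.compileExpression(Expressions.column("salary")))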
-
getCatalystType
public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
Description copied from class: JdbcDialect
Get the custom datatype mapping for the given jdbc meta information.
Guidelines for mapping database defined timestamps to Spark SQL timestamps:
- TIMESTAMP WITHOUT TIME ZONE if preferTimestampNTZ -> TimestampNTZType
- TIMESTAMP WITHOUT TIME ZONE if !preferTimestampNTZ -> TimestampType(LTZ)
- TIMESTAMP WITH TIME ZONE -> TimestampType(LTZ)
- TIMESTAMP WITH LOCAL TIME ZONE -> TimestampType(LTZ)
- If the TIMESTAMP cannot be distinguished by sqlType and typeName, preferTimestampNTZ is respected for now, but we may need to add another option in the future if necessary.
- Overrides:
getCatalystType in class JdbcDialect
- Parameters:
sqlType - Refers to Types constants, or other constants defined by the target database, e.g. -101 is Oracle's TIMESTAMP WITH TIME ZONE type. This value is returned by ResultSetMetaData.getColumnType(int).
typeName - The column type name used by the database (e.g. "BIGINT UNSIGNED"). This is sometimes used to determine the target data type when sqlType is not sufficient if multiple database types are conflated into a single id. This value is returned by ResultSetMetaData.getColumnTypeName(int).
size - The size of the type, e.g. the maximum precision for numeric types, length for character string, etc. This value is returned by ResultSetMetaData.getPrecision(int).
md - Result metadata associated with this type. This contains additional information from ResultSetMetaData or user specified options:
- isTimestampNTZ: Whether to read a TIMESTAMP WITHOUT TIME ZONE value as TimestampNTZType or not. This is configured by JDBCOptions.preferTimestampNTZ.
- scale: The length of the fractional part, as returned by ResultSetMetaData.getScale(int).
- Returns:
- An option of the actual DataType (subclasses of DataType) or None if the default type mapping should be used.
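Example (sketch; connection details, table and credentials are made up): the preferTimestampNTZ JDBC option drives the isTimestampNTZ metadata described above, so Oracle TIMESTAMP columns without a time zone are read as TimestampNTZType, while the TIME ZONE variants still map to TimestampType.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("oracle-timestamps").master("local[*]").getOrCreate()

val events = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
  .option("dbtable", "events")
  .option("user", "app_user")
  .option("password", "secret")
  .option("preferTimestampNTZ", "true") // TIMESTAMP (no time zone) -> TimestampNTZType
  .load()

events.printSchema() // TIMESTAMP WITH (LOCAL) TIME ZONE columns still appear as TimestampType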
-
getJDBCType
public scala.Option<JdbcType> getJDBCType(DataType dt)
Description copied from class: JdbcDialect
Retrieve the jdbc / sql type for a given datatype.
- Overrides:
getJDBCType in class JdbcDialect
- Parameters:
dt - The datatype (e.g. StringType)
- Returns:
- The new JdbcType if there is an override for this DataType
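Example (sketch; the URL is made up): the concrete Oracle type definitions (VARCHAR2 length, NUMBER precision, and so on) depend on the Spark version, so the mapping is printed rather than asserted.
import org.apache.spark.sql.jdbc.JdbcDialects
import org.apache.spark.sql.types.{BooleanType, StringType}

val dialect = JdbcDialects.get("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
for (dt <- Seq(StringType, BooleanType)) {
  dialect.getJDBCType(dt) match {
    case Some(jt) => println(s"$dt -> ${jt.databaseTypeDefinition} (jdbc type ${jt.jdbcNullType})")
    case None     => println(s"$dt -> default mapping")
  }
}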
-
compileValue
public Object compileValue(Object value)
Description copied from class: JdbcDialect
Converts value to SQL expression.
- Overrides:
compileValue in class JdbcDialect
- Parameters:
value - The value to be converted.
- Returns:
- Converted value.
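Example (sketch; the URL is made up): string values are escaped and quoted, and date/timestamp values are rendered as SQL literals; the exact literal syntax can vary across Spark versions, so nothing is asserted here.
import java.sql.Timestamp
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
println(dialect.compileValue("O'Brien"))                                // escaped, quoted string literal
println(dialect.compileValue(Timestamp.valueOf("2024-01-01 00:00:00"))) // timestamp literal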
-
isCascadingTruncateTable
public scala.Option<Object> isCascadingTruncateTable()
Description copied from class: JdbcDialect
Return Some[true] iff TRUNCATE TABLE causes cascading by default.
Some[true] : TRUNCATE TABLE causes cascading.
Some[false] : TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
- Overrides:
isCascadingTruncateTable in class JdbcDialect
- Returns:
- (undocumented)
-
getTruncateQuery
public String getTruncateQuery(String table, scala.Option<Object> cascade)
The SQL query used to truncate a table.
- Overrides:
getTruncateQuery in class JdbcDialect
- Parameters:
table - The table to truncate
cascade - Whether or not to cascade the truncation. Default value is the value of isCascadingTruncateTable()
- Returns:
- The SQL query to use for truncating a table
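Example (sketch; the table name is made up): when cascade is omitted it defaults to isCascadingTruncateTable(), and passing Some(true) asks the dialect for the cascading variant of the statement. The query is only built as a string, nothing is executed.
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
println(dialect.getTruncateQuery("hr.employees"))             // plain TRUNCATE TABLE statement
println(dialect.getTruncateQuery("hr.employees", Some(true))) // cascading variant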
-
getAddColumnQuery
public String getAddColumnQuery(String tableName, String columnName, String dataType)
- Overrides:
getAddColumnQuery in class JdbcDialect
-
getUpdateColumnTypeQuery
public String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
- Overrides:
getUpdateColumnTypeQuery in class JdbcDialect
-
getUpdateColumnNullabilityQuery
public String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
- Overrides:
getUpdateColumnNullabilityQuery in class JdbcDialect
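Example (sketch; table and column names are made up): these helpers only build Oracle ALTER TABLE statements as strings, they do not execute anything, and the exact DDL text can vary across Spark versions.
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
println(dialect.getAddColumnQuery("hr.employees", "nickname", "VARCHAR2(50)"))
println(dialect.getUpdateColumnTypeQuery("hr.employees", "salary", "NUMBER(12,2)"))
println(dialect.getUpdateColumnNullabilityQuery("hr.employees", "nickname", true))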
-
getLimitClause
public String getLimitClause(Integer limit)
Description copied from class: JdbcDialect
Returns the LIMIT clause for the SELECT statement.
- Overrides:
getLimitClause in class JdbcDialect
- Parameters:
limit - (undocumented)
- Returns:
- (undocumented)
-
getOffsetClause
public String getOffsetClause(Integer offset)
Description copied from class: JdbcDialect
Returns the OFFSET clause for the SELECT statement.
- Overrides:
getOffsetClause in class JdbcDialect
- Parameters:
offset - (undocumented)
- Returns:
- (undocumented)
-
getJdbcSQLQueryBuilder
public JdbcSQLQueryBuilder getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Description copied from class: JdbcDialect
Returns the SQL builder for the SELECT statement.
- Overrides:
getJdbcSQLQueryBuilder in class JdbcDialect
- Parameters:
options - (undocumented)
- Returns:
- (undocumented)
-
supportsLimit
public boolean supportsLimit()
Description copied from class: JdbcDialect
Returns true if dialect supports LIMIT clause.
Note: Some built-in dialects support the LIMIT clause with some tricks, please see: OracleDialect.OracleSQLQueryBuilder and MsSqlServerDialect.MsSqlServerSQLQueryBuilder.
- Overrides:
supportsLimit in class JdbcDialect
- Returns:
- (undocumented)
-
supportsOffset
public boolean supportsOffset()
Description copied from class: JdbcDialect
Returns true if dialect supports OFFSET clause.
Note: Some built-in dialects support the OFFSET clause with some tricks, please see: OracleDialect.OracleSQLQueryBuilder and MySQLDialect.MySQLSQLQueryBuilder.
- Overrides:
supportsOffset in class JdbcDialect
- Returns:
- (undocumented)
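Example (sketch; connection details, table and credentials are made up): because OracleDialect overrides supportsLimit() and supportsOffset(), a LIMIT or OFFSET applied to a JDBC-backed DataFrame may be pushed into the generated Oracle query through OracleDialect.OracleSQLQueryBuilder instead of being applied after the rows are fetched. Whether pushdown actually happens also depends on the pushDownLimit/pushDownOffset options and the query plan.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("oracle-pushdown").master("local[*]").getOrCreate()

val employees = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
  .option("dbtable", "hr.employees")
  .option("user", "app_user")
  .option("password", "secret")
  .option("pushDownLimit", "true")
  .option("pushDownOffset", "true")
  .load()

employees.limit(10).show() // the LIMIT may appear in the SELECT sent to Oracle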
-
supportsHint
public boolean supportsHint()
- Overrides:
supportsHint in class JdbcDialect
-
classifyException
public Throwable classifyException(Throwable e, String condition, scala.collection.immutable.Map<String, String> messageParameters, String description, boolean isRuntime)
Description copied from class: JdbcDialect
Gets a dialect exception, classifies it and wraps it by AnalysisException.
- Specified by:
classifyException in interface NoLegacyJDBCError
- Overrides:
classifyException in class JdbcDialect
- Parameters:
e - The dialect specific exception.
condition - The error condition assigned in the case of an unclassified e
messageParameters - The message parameters of errorClass
description - The error description
isRuntime - Whether the exception is a runtime exception or not.
- Returns:
- SparkThrowable + Throwable or its sub-class.
-