public abstract class JdbcDialect
extends Object
implements scala.Serializable, org.apache.spark.internal.Logging
Currently, the only thing done by the dialect is type mapping. getCatalystType is used when reading from a JDBC table and getJDBCType is used when writing to a JDBC table. If getCatalystType returns null, the default type handling is used for the given JDBC type. Similarly, if getJDBCType returns (null, None), the default type handling is used for the given Catalyst type.
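For orientation, a minimal sketch of a custom dialect in Scala (the MyDialect object, the jdbc:mydb URL prefix, and both type choices are hypothetical; only canHandle is abstract, and returning None from either hook falls back to the defaults described above):

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

// Hypothetical dialect for a database reachable via jdbc:mydb URLs.
object MyDialect extends JdbcDialect {
  // Claim the URLs this dialect is responsible for.
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Read path: choose a Catalyst type for a JDBC column, or None to use
  // the default mapping for this JDBC type.
  override def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
    if (sqlType == Types.BIGINT && typeName.contains("UNSIGNED")) Some(DecimalType(20, 0))
    else None
  }

  // Write path: choose a database type for a Catalyst type, or None to use
  // the default mapping for this Catalyst type.
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("CLOB", Types.CLOB))
    case _          => None
  }
}

// Make the dialect discoverable by the JDBC data source.
JdbcDialects.registerDialect(MyDialect)
```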
| Constructor and Description |
|---|
| JdbcDialect() |
| Modifier and Type | Method and Description |
|---|---|
| String[] | alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion): Alter an existing table. |
| void | beforeFetch(java.sql.Connection connection, scala.collection.immutable.Map<String,String> properties): Override connection specific properties to run before a select is made. |
| abstract boolean | canHandle(String url): Check if this dialect instance can handle a certain jdbc url. |
| AnalysisException | classifyException(String message, Throwable e): Gets a dialect exception, classifies it and wraps it by AnalysisException. |
| scala.Option<String> | compileAggregate(AggregateFunc aggFunction): Deprecated. Use org.apache.spark.sql.jdbc.JdbcDialect.compileExpression instead. Since 3.4.0. |
| scala.Option<String> | compileExpression(Expression expr): Converts a V2 expression to a String representing a SQL expression. |
| Object | compileValue(Object value): Converts a value to a SQL expression. |
| java.sql.Timestamp | convertJavaTimestampToTimestamp(java.sql.Timestamp t): Converts an instance of java.sql.Timestamp to a custom java.sql.Timestamp value. |
| java.time.LocalDateTime | convertJavaTimestampToTimestampNTZ(java.sql.Timestamp t): Converts java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database. |
| java.sql.Timestamp | convertTimestampNTZToJavaTimestamp(java.time.LocalDateTime ldt): Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp. |
| scala.Function1<Object,java.sql.Connection> | createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options): Returns a factory for creating connections to the given JDBC URL. |
| String | createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, java.util.Map<NamedReference,java.util.Map<String,String>> columnsProperties, java.util.Map<String,String> properties): Build a create index SQL statement. |
| void | createSchema(java.sql.Statement statement, String schema, String comment): Create a schema with an optional comment. |
| void | createTable(java.sql.Statement statement, String tableName, String strSchema, org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite options): Create the table if the table does not exist. |
| String | dropIndex(String indexName, Identifier tableIdent): Build a drop index SQL statement. |
| String | dropSchema(String schema, boolean cascade) |
| scala.collection.Seq<scala.Tuple2<String,UnboundFunction>> | functions(): List the user-defined functions in the JDBC dialect. |
| String | getAddColumnQuery(String tableName, String columnName, String dataType) |
| scala.Option<DataType> | getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md): Get the custom datatype mapping for the given jdbc meta information. |
| String | getDeleteColumnQuery(String tableName, String columnName) |
| String | getFullyQualifiedQuotedTableName(Identifier ident): Return the DB-specific quoted and fully qualified table name. |
| JdbcSQLQueryBuilder | getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options): Returns the SQL builder for the SELECT statement. |
| scala.Option<JdbcType> | getJDBCType(DataType dt): Retrieve the jdbc / sql type for a given datatype. |
| String | getLimitClause(Integer limit): Returns the LIMIT clause for the SELECT statement. |
| String | getOffsetClause(Integer offset): Returns the OFFSET clause for the SELECT statement. |
| String | getRenameColumnQuery(String tableName, String columnName, String newName, int dbMajorVersion) |
| String | getSchemaCommentQuery(String schema, String comment) |
| String | getSchemaQuery(String table): The SQL query that should be used to discover the schema of a table. |
| String | getTableCommentQuery(String table, String comment) |
| String | getTableExistsQuery(String table): Get the SQL query that should be used to find if the given table exists. |
| String | getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample) |
| String | getTruncateQuery(String table): The SQL query that should be used to truncate a table. |
| String | getTruncateQuery(String table, scala.Option<Object> cascade): The SQL query that should be used to truncate a table. |
| String | getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable) |
| String | getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType) |
| boolean | indexExists(java.sql.Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options): Checks whether an index exists. |
| scala.Option<Object> | isCascadingTruncateTable(): Return Some[true] iff TRUNCATE TABLE causes cascading by default. |
| boolean | isSupportedFunction(String funcName): Returns whether the database supports the given function. |
| TableIndex[] | listIndexes(java.sql.Connection conn, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options): Lists all the indexes in this table. |
| String[][] | listSchemas(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options): Lists all the schemas in the database. |
| String | quoteIdentifier(String colName): Quotes the identifier. |
| String | removeSchemaCommentQuery(String schema) |
| String | renameTable(Identifier oldTable, Identifier newTable): Rename an existing table. |
| String | renameTable(String oldTable, String newTable): Deprecated. Please override the renameTable method with identifiers instead. Since 3.5.0. |
| boolean | schemasExists(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema): Check whether the schema exists. |
| boolean | supportsLimit(): Returns true if the dialect supports the LIMIT clause. |
| boolean | supportsOffset(): Returns true if the dialect supports the OFFSET clause. |
| boolean | supportsTableSample() |
Methods inherited from class Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public String[] alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion)
Parameters:
tableName - The name of the table to be altered.
changes - Changes to apply to the table.
dbMajorVersion - (undocumented)

public void beforeFetch(java.sql.Connection connection, scala.collection.immutable.Map<String,String> properties)
Parameters:
connection - The connection object.
properties - The connection properties. This is passed through from the relation.
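
As an illustration, the built-in PostgreSQL dialect uses this hook to turn off autocommit when a positive fetch size is configured, so the driver streams rows in batches; a sketch of that pattern (the jdbc:mydb URL prefix is a placeholder):

```scala
import java.sql.Connection
import org.apache.spark.sql.jdbc.JdbcDialect

object StreamingFetchDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // With autocommit enabled, some drivers materialize the whole result
  // set; disabling it lets the configured fetchsize take effect.
  override def beforeFetch(
      connection: Connection,
      properties: Map[String, String]): Unit = {
    if (properties.getOrElse("fetchsize", "0").toInt > 0) {
      connection.setAutoCommit(false)
    }
  }
}
```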

public abstract boolean canHandle(String url)
Parameters:
url - the jdbc url.
Throws:
NullPointerException - if the url is null.

public AnalysisException classifyException(String message, Throwable e)

Gets a dialect exception, classifies it and wraps it by AnalysisException.

Parameters:
message - The error message to be placed to the returned exception.
e - The dialect specific exception.
Returns:
AnalysisException or its sub-class.

public scala.Option<String> compileAggregate(AggregateFunc aggFunction)

Deprecated. Use org.apache.spark.sql.jdbc.JdbcDialect.compileExpression instead. Since 3.4.0.

Parameters:
aggFunction - The aggregate function to be converted.

public scala.Option<String> compileExpression(Expression expr)
Parameters:
expr - The V2 expression to be converted.

public Object compileValue(Object value)
Parameters:
value - The value to be converted.
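
A usage sketch of the default behavior (the outputs shown in comments are what the base implementation is expected to produce; verify against your Spark version):

```scala
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:postgresql://host:5432/db")
// String literals are escaped and single-quoted for use in a WHERE clause.
dialect.compileValue("O'Brien") // expected: 'O''Brien'
// Numeric values pass through unchanged.
dialect.compileValue(42)        // expected: 42
```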

public java.sql.Timestamp convertJavaTimestampToTimestamp(java.sql.Timestamp t)

Converts an instance of java.sql.Timestamp to a custom java.sql.Timestamp value.

Parameters:
t - represents a specific instant in time based on the hybrid calendar which combines Julian and Gregorian calendars.
Throws:
IllegalArgumentException - if t is null.

public java.time.LocalDateTime convertJavaTimestampToTimestampNTZ(java.sql.Timestamp t)
Parameters:
t - Timestamp returned from the JDBC driver's getTimestamp method.

public java.sql.Timestamp convertTimestampNTZToJavaTimestamp(java.time.LocalDateTime ldt)

Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.

Parameters:
ldt - representing a TimestampNTZType.

public scala.Function1<Object,java.sql.Connection> createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Parameters:
options - JDBC options that contain url, table and other information.
Throws:
IllegalArgumentException - if the driver could not open a JDBC connection.
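
A usage sketch; jdbcOptions stands for a JDBCOptions instance built elsewhere (hypothetical here), and the factory's Int argument is the partition id, with -1 conventionally used when no particular partition applies:

```scala
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:postgresql://host:5432/db")
val connect = dialect.createConnectionFactory(jdbcOptions) // jdbcOptions: assumed JDBCOptions
val conn = connect(-1) // -1: not tied to a particular partition
try {
  // ... run statements on conn ...
} finally {
  conn.close()
}
```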

public String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, java.util.Map<NamedReference,java.util.Map<String,String>> columnsProperties, java.util.Map<String,String> properties)
Parameters:
indexName - the name of the index to be created.
tableIdent - the table on which the index is to be created.
columns - the columns on which the index is to be created.
columnsProperties - the properties of the columns on which the index is to be created.
properties - the properties of the index to be created.

public void createSchema(java.sql.Statement statement, String schema, String comment)
Parameters:
statement - (undocumented)
schema - (undocumented)
comment - (undocumented)

public void createTable(java.sql.Statement statement, String tableName, String strSchema, org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite options)
Parameters:
statement - (undocumented)
tableName - (undocumented)
strSchema - (undocumented)
options - (undocumented)

public String dropIndex(String indexName, Identifier tableIdent)
Parameters:
indexName - the name of the index to be dropped.
tableIdent - the table from which the index is to be dropped.

public String dropSchema(String schema, boolean cascade)
public scala.collection.Seq<scala.Tuple2<String,UnboundFunction>> functions()
public String getAddColumnQuery(String tableName, String columnName, String dataType)
public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
Parameters:
sqlType - The sql type (see java.sql.Types).
typeName - The sql type name (e.g. "BIGINT UNSIGNED").
size - The size of the type.
md - Result metadata associated with this type.
Returns:
The actual DataType (subclasses of DataType) or null if the default type mapping should be used.

public String getDeleteColumnQuery(String tableName, String columnName)
public String getFullyQualifiedQuotedTableName(Identifier ident)
Parameters:
ident - (undocumented)

public scala.Option<JdbcType> getJDBCType(DataType dt)
Parameters:
dt - The datatype (e.g. StringType).

public JdbcSQLQueryBuilder getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Parameters:
options - (undocumented)

public String getLimitClause(Integer limit)
Parameters:
limit - (undocumented)

public String getOffsetClause(Integer offset)
Parameters:
offset - (undocumented)
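
Illustrative outputs of the default clause builders (expected behavior; verify against your Spark version): a positive value yields a plain SQL fragment and a non-positive value yields an empty string.

```scala
import org.apache.spark.sql.jdbc.JdbcDialects

val dialect = JdbcDialects.get("jdbc:postgresql://host:5432/db")
dialect.getLimitClause(10) // expected: "LIMIT 10"
dialect.getOffsetClause(5) // expected: "OFFSET 5"
dialect.getLimitClause(0)  // expected: ""
```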

public String getRenameColumnQuery(String tableName, String columnName, String newName, int dbMajorVersion)

public String getSchemaCommentQuery(String schema, String comment)
public String getSchemaQuery(String table)
Parameters:
table - The name of the table.

public String getTableCommentQuery(String table, String comment)
public String getTableExistsQuery(String table)
Parameters:
table - The name of the table.
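
A sketch of a dialect-specific existence probe (the jdbc:mydb prefix is a placeholder; the stock implementation probes with a WHERE 1=0 predicate so the query returns metadata but no rows):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object ExistenceProbeDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Probe cheaply: LIMIT 1 ensures at most one row is ever returned.
  override def getTableExistsQuery(table: String): String =
    s"SELECT 1 FROM $table LIMIT 1"
}
```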

public String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)

public String getTruncateQuery(String table)
Parameters:
table - The table to truncate.

public String getTruncateQuery(String table, scala.Option<Object> cascade)
Parameters:
table - The table to truncate.
cascade - Whether or not to cascade the truncation.
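
A sketch honoring the cascade flag with a PostgreSQL-style CASCADE suffix (the URL prefix is a placeholder; the base implementation emits a plain TRUNCATE TABLE statement):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object CascadingTruncateDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Append CASCADE only when explicitly requested.
  override def getTruncateQuery(
      table: String,
      cascade: Option[Boolean] = isCascadingTruncateTable): String = {
    cascade match {
      case Some(true) => s"TRUNCATE TABLE $table CASCADE"
      case _          => s"TRUNCATE TABLE $table"
    }
  }
}
```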

public String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)

public String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
public boolean indexExists(java.sql.Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Parameters:
indexName - the name of the index.
tableIdent - the table on which the index is to be checked.
options - JDBCOptions of the table.
conn - (undocumented)
Returns:
true if the index with indexName exists in the table with tableName, false otherwise.

public scala.Option<Object> isCascadingTruncateTable()

Return Some[true] iff TRUNCATE TABLE causes cascading by default.

Returns:
Some[true]: TRUNCATE TABLE causes cascading.
Some[false]: TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
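
Where truncation cascades by default on the target database, the override is a one-liner; a sketch (the URL prefix is a placeholder):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object CascadeAwareDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // TRUNCATE TABLE on this (hypothetical) database cascades by default.
  override def isCascadingTruncateTable(): Option[Boolean] = Some(true)
}
```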

public boolean isSupportedFunction(String funcName)
Parameters:
funcName - Upper-cased function name.
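
A sketch advertising a pushdown allowlist (the function names listed here are illustrative, not a set the connector requires):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object PushdownDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Names arrive upper-cased; anything not listed stays in Spark
  // rather than being pushed down to the database.
  private val supported = Set("ABS", "COALESCE", "LN", "EXP", "SQRT")

  override def isSupportedFunction(funcName: String): Boolean =
    supported.contains(funcName)
}
```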

public TableIndex[] listIndexes(java.sql.Connection conn, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Parameters:
conn - (undocumented)
tableIdent - (undocumented)
options - (undocumented)

public String[][] listSchemas(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Parameters:
conn - (undocumented)
options - (undocumented)

public String quoteIdentifier(String colName)
Parameters:
colName - (undocumented)
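
A sketch of MySQL-style backtick quoting (the base implementation wraps the name in double quotes; the URL prefix is a placeholder):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object BacktickQuotingDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")
  override def quoteIdentifier(colName: String): String = s"`$colName`"
}
```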

public String removeSchemaCommentQuery(String schema)

public String renameTable(String oldTable, String newTable)

Deprecated. Please override the renameTable method with identifiers instead. Since 3.5.0.

Parameters:
oldTable - The existing table.
newTable - New name of the table.

public String renameTable(Identifier oldTable, Identifier newTable)
Parameters:
oldTable - The existing table.
newTable - New name of the table.

public boolean schemasExists(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema)
Parameters:
conn - (undocumented)
options - (undocumented)
schema - (undocumented)

public boolean supportsLimit()

Note: Some built-in dialects support the LIMIT clause only through a workaround; see OracleDialect.OracleSQLQueryBuilder and MsSqlServerDialect.MsSqlServerSQLQueryBuilder.
public boolean supportsOffset()
Note: Some built-in dialects support the OFFSET clause only through a workaround; see OracleDialect.OracleSQLQueryBuilder and MySQLDialect.MySQLSQLQueryBuilder.
public boolean supportsTableSample()