Package org.apache.spark.sql.jdbc
Class JdbcDialect
Object
org.apache.spark.sql.jdbc.JdbcDialect
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, scala.Serializable
- Direct Known Subclasses:
AggregatedDialect
public abstract class JdbcDialect
extends Object
implements scala.Serializable, org.apache.spark.internal.Logging
:: DeveloperApi ::
Encapsulates everything (extensions, workarounds, quirks) to handle the
SQL dialect of a certain database or jdbc driver.
Lots of databases define types that aren't explicitly supported
by the JDBC spec. Some JDBC drivers also report inaccurate
information---for instance, BIT(n>1) being reported as a BIT type is quite
common, even though BIT in JDBC is meant for single-bit values. Also, there
does not appear to be a standard name for an unbounded string or binary
type; we use BLOB and CLOB by default but override with database-specific
alternatives when these are absent or do not behave correctly.
Currently, the only thing done by the dialect is type mapping. getCatalystType is used when reading from a JDBC table and getJDBCType is used when writing to a JDBC table. If getCatalystType returns null, the default type handling is used for the given JDBC type. Similarly, if getJDBCType returns (null, None), the default type handling is used for the given Catalyst type.
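For illustration, a minimal custom dialect might look like the following Scala sketch. The jdbc:acmedb URL prefix and the ACME_JSON type name are hypothetical; registering the dialect with JdbcDialects.registerDialect makes it visible to the JDBC data source.

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

// Hypothetical dialect for a fictional "acmedb" JDBC driver.
object AcmeDialect extends JdbcDialect {
  // Claim URLs of the form jdbc:acmedb://...
  override def canHandle(url: String): Boolean =
    url.toLowerCase.startsWith("jdbc:acmedb")

  // Read path: map a vendor-specific column type to a Catalyst type.
  // Returning None falls back to the default JDBC type handling.
  override def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] =
    if (sqlType == Types.OTHER && typeName.equalsIgnoreCase("ACME_JSON")) Some(StringType)
    else None

  // Write path: map a Catalyst type to a vendor-specific column definition.
  // Returning None uses the default mapping for that type.
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("CLOB", Types.CLOB))
    case _          => None
  }
}

// Register the dialect so the JDBC data source can pick it up by URL.
JdbcDialects.registerDialect(AcmeDialect)
```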
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
JdbcDialect()
-
Method Summary
String[] alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion)
  Alter an existing table.
void beforeFetch(Connection connection, scala.collection.immutable.Map<String,String> properties)
  Override connection specific properties to run before a select is made.
abstract boolean canHandle(String url)
  Check if this dialect instance can handle a certain jdbc url.
AnalysisException classifyException(String message, Throwable e)
  Gets a dialect exception, classifies it and wraps it by AnalysisException.
scala.Option<String> compileAggregate(AggregateFunc aggFunction)
  Deprecated. Use org.apache.spark.sql.jdbc.JdbcDialect.compileExpression instead.
scala.Option<String> compileExpression(Expression expr)
  Converts V2 expression to String representing a SQL expression.
Object compileValue(Object value)
  Converts value to SQL expression.
java.sql.Timestamp convertJavaTimestampToTimestamp(java.sql.Timestamp t)
  Converts an instance of java.sql.Timestamp to a custom java.sql.Timestamp value.
java.time.LocalDateTime convertJavaTimestampToTimestampNTZ(java.sql.Timestamp t)
  Converts java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database.
java.sql.Timestamp convertTimestampNTZToJavaTimestamp(java.time.LocalDateTime ldt)
  Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.
scala.Function1<Object,Connection> createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
  Returns a factory for creating connections to the given JDBC URL.
String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference,Map<String,String>> columnsProperties, Map<String,String> properties)
  Build a create index SQL statement.
void createSchema(Statement statement, String schema, String comment)
  Create schema with an optional comment.
void createTable(Statement statement, String tableName, String strSchema, org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite options)
  Create the table if the table does not exist.
String dropIndex(String indexName, Identifier tableIdent)
  Build a drop index SQL statement.
String dropSchema(String schema, boolean cascade)
scala.collection.Seq<scala.Tuple2<String,UnboundFunction>> functions()
  List the user-defined functions in jdbc dialect.
String getAddColumnQuery(String tableName, String columnName, String dataType)
scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
  Get the custom datatype mapping for the given jdbc meta information.
String getDeleteColumnQuery(String tableName, String columnName)
String getFullyQualifiedQuotedTableName(Identifier ident)
  Return the DB-specific quoted and fully qualified table name.
JdbcSQLQueryBuilder getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
  Returns the SQL builder for the SELECT statement.
scala.Option<JdbcType> getJDBCType(DataType dt)
  Retrieve the jdbc / sql type for a given datatype.
String getLimitClause(Integer limit)
  Returns the LIMIT clause for the SELECT statement.
String getOffsetClause(Integer offset)
  Returns the OFFSET clause for the SELECT statement.
String getRenameColumnQuery(String tableName, String columnName, String newName, int dbMajorVersion)
String getSchemaCommentQuery(String schema, String comment)
String getSchemaQuery(String table)
  The SQL query that should be used to discover the schema of a table.
String getTableCommentQuery(String table, String comment)
String getTableExistsQuery(String table)
  Get the SQL query that should be used to find if the given table exists.
String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)
String getTruncateQuery(String table)
  The SQL query that should be used to truncate a table.
String getTruncateQuery(String table, scala.Option<Object> cascade)
  The SQL query that should be used to truncate a table.
String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
boolean indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
  Checks whether an index exists.
scala.Option<Object> isCascadingTruncateTable()
  Return Some[true] iff TRUNCATE TABLE causes cascading by default.
boolean isSupportedFunction(String funcName)
  Returns whether the database supports the function.
TableIndex[] listIndexes(Connection conn, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
  Lists all the indexes in this table.
String[][] listSchemas(Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
  Lists all the schemas in this table.
String quoteIdentifier(String colName)
  Quotes the identifier.
String removeSchemaCommentQuery(String schema)
String renameTable(String oldTable, String newTable)
  Deprecated. Please override the renameTable method with identifiers.
String renameTable(Identifier oldTable, Identifier newTable)
  Rename an existing table.
boolean schemasExists(Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema)
  Check whether a schema exists.
boolean supportsLimit()
  Returns true if the dialect supports the LIMIT clause.
boolean supportsOffset()
  Returns true if the dialect supports the OFFSET clause.
boolean supportsTableSample()
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq
-
Constructor Details
-
JdbcDialect
public JdbcDialect()
-
-
Method Details
-
alterTable
public String[] alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion)
Alter an existing table.
- Parameters:
  tableName - The name of the table to be altered.
  changes - Changes to apply to the table.
  dbMajorVersion - (undocumented)
- Returns:
  The SQL statements to use for altering the table.
-
beforeFetch
public void beforeFetch(Connection connection, scala.collection.immutable.Map<String,String> properties)
Override connection specific properties to run before a select is made. This is in place to allow dialects that need special treatment to optimize behavior.
- Parameters:
  connection - The connection object
  properties - The connection properties. This is passed through from the relation.
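As a sketch (not the built-in implementation), a dialect might use this hook to disable autocommit so the driver honours a configured fetch size and streams rows. The jdbc:acmedb URL prefix and the "fetchsize" option key are assumptions here.

```scala
import java.sql.Connection
import org.apache.spark.sql.jdbc.JdbcDialect

class StreamingFetchDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Disable autocommit before Spark issues the SELECT when a positive
  // fetch size is configured, so the driver can stream the result set.
  override def beforeFetch(connection: Connection, properties: Map[String, String]): Unit = {
    super.beforeFetch(connection, properties)
    if (properties.getOrElse("fetchsize", "0").toInt > 0) {
      connection.setAutoCommit(false)
    }
  }
}
```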
-
canHandle
public abstract boolean canHandle(String url)
Check if this dialect instance can handle a certain jdbc url.
- Parameters:
  url - the jdbc url.
- Returns:
  True if the dialect can be applied on the given jdbc url.
- Throws:
  NullPointerException - if the url is null.
-
classifyException
public AnalysisException classifyException(String message, Throwable e)
Gets a dialect exception, classifies it and wraps it by AnalysisException.
- Parameters:
  message - The error message to be placed to the returned exception.
  e - The dialect specific exception.
- Returns:
  AnalysisException or its sub-class.
-
compileAggregate
public scala.Option<String> compileAggregate(AggregateFunc aggFunction)
Deprecated. Use org.apache.spark.sql.jdbc.JdbcDialect.compileExpression instead. Since 3.4.0.
Converts aggregate function to String representing a SQL expression.
- Parameters:
  aggFunction - The aggregate function to be converted.
- Returns:
  Converted value.
-
compileExpression
public scala.Option<String> compileExpression(Expression expr)
Converts V2 expression to String representing a SQL expression.
- Parameters:
  expr - The V2 expression to be converted.
- Returns:
  Converted value.
-
compileValue
public Object compileValue(Object value)
Converts value to SQL expression.
- Parameters:
  value - The value to be converted.
- Returns:
  Converted value.
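A hedged sketch of an override: render Boolean values as 1/0 for a database without TRUE/FALSE literals, delegating everything else to the default behaviour. The URL prefix is hypothetical.

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

class NumericBooleanDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Render booleans as numeric literals; defer to the default for other values.
  override def compileValue(value: Any): Any = value match {
    case b: Boolean => if (b) 1 else 0
    case _          => super.compileValue(value)
  }
}
```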
-
convertJavaTimestampToTimestamp
public java.sql.Timestamp convertJavaTimestampToTimestamp(java.sql.Timestamp t)
Converts an instance of java.sql.Timestamp to a custom java.sql.Timestamp value.
- Parameters:
  t - represents a specific instant in time based on the hybrid calendar which combines Julian and Gregorian calendars.
- Returns:
  the timestamp value to convert to
- Throws:
  IllegalArgumentException - if t is null
-
convertJavaTimestampToTimestampNTZ
public java.time.LocalDateTime convertJavaTimestampToTimestampNTZ(java.sql.Timestamp t)
Converts java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database. JDBC dialects should override this function to provide implementations that suit their JDBC drivers.
- Parameters:
  t - Timestamp returned from the JDBC driver getTimestamp method.
- Returns:
  A LocalDateTime representing the same wall-clock time as the timestamp in the database.
-
convertTimestampNTZToJavaTimestamp
public java.sql.Timestamp convertTimestampNTZToJavaTimestamp(java.time.LocalDateTime ldt)
Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.
- Parameters:
  ldt - representing a TimestampNTZType.
- Returns:
  A Java Timestamp representing this LocalDateTime.
-
createConnectionFactory
public scala.Function1<Object,Connection> createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Returns a factory for creating connections to the given JDBC URL. In general, creating a connection has nothing to do with the JDBC partition id, but sometimes it is needed, such as for a database with multiple shard nodes.
- Parameters:
  options - JDBC options that contain url, table and other information.
- Returns:
  The factory method for creating JDBC connections with the RDD partition ID. -1 means the connection is being created at the driver side.
- Throws:
  IllegalArgumentException - if the driver could not open a JDBC connection.
-
createIndex
public String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference,Map<String,String>> columnsProperties, Map<String,String> properties)
Build a create index SQL statement.
- Parameters:
  indexName - the name of the index to be created
  tableIdent - the table on which the index is to be created
  columns - the columns on which the index is to be created
  columnsProperties - the properties of the columns on which the index is to be created
  properties - the properties of the index to be created
- Returns:
  the SQL statement to use for creating the index.
-
createSchema
public void createSchema(Statement statement, String schema, String comment)
Create schema with an optional comment. Empty string means no comment.
- Parameters:
  statement - (undocumented)
  schema - (undocumented)
  comment - (undocumented)
-
createTable
public void createTable(Statement statement, String tableName, String strSchema, org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite options)
Create the table if the table does not exist. This allows certain options to be appended when creating a new table, such as table_options or partition_options. E.g., "CREATE TABLE t (name string) ENGINE=InnoDB DEFAULT CHARSET=utf8"
- Parameters:
  statement -
  tableName -
  strSchema -
  options -
-
dropIndex
public String dropIndex(String indexName, Identifier tableIdent)
Build a drop index SQL statement.
- Parameters:
  indexName - the name of the index to be dropped.
  tableIdent - the table from which the index is to be dropped.
- Returns:
  the SQL statement to use for dropping the index.
-
dropSchema
-
functions
public scala.collection.Seq<scala.Tuple2<String,UnboundFunction>> functions()
List the user-defined functions in jdbc dialect.
- Returns:
  a sequence of tuples from function name to user-defined function.
-
getAddColumnQuery
-
getCatalystType
public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
Get the custom datatype mapping for the given jdbc meta information.
- Parameters:
  sqlType - The sql type (see java.sql.Types)
  typeName - The sql type name (e.g. "BIGINT UNSIGNED")
  size - The size of the type.
  md - Result metadata associated with this type.
- Returns:
  The actual DataType (subclasses of DataType) or null if the default type mapping should be used.
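A sketch tying back to the BIT(n>1) caveat in the class description: treat multi-bit BIT columns as binary data rather than booleans. The URL prefix is hypothetical, and returning None defers to the default mapping.

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.JdbcDialect
import org.apache.spark.sql.types.{BinaryType, DataType, MetadataBuilder}

class BitAwareDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // A driver that reports BIT(n > 1) as java.sql.Types.BIT: map it to BinaryType,
  // and let single-bit BIT columns fall through to the default (boolean) handling.
  override def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] =
    if (sqlType == Types.BIT && size > 1) Some(BinaryType) else None
}
```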
-
getDeleteColumnQuery
-
getFullyQualifiedQuotedTableName
public String getFullyQualifiedQuotedTableName(Identifier ident)
Return the DB-specific quoted and fully qualified table name.
- Parameters:
  ident - (undocumented)
- Returns:
  (undocumented)
-
getJDBCType
public scala.Option<JdbcType> getJDBCType(DataType dt)
Retrieve the jdbc / sql type for a given datatype.
- Parameters:
  dt - The datatype (e.g. StringType)
- Returns:
  The new JdbcType if there is an override for this DataType
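A sketch of the write-path counterpart: store Spark BooleanType columns as NUMBER(1) on a database without a native boolean column type. The mapping and URL prefix are illustrative assumptions.

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcType}
import org.apache.spark.sql.types.{BooleanType, DataType}

class NoBooleanColumnDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Persist BooleanType as a one-digit number; None keeps the default mapping.
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case BooleanType => Some(JdbcType("NUMBER(1)", Types.BOOLEAN))
    case _           => None
  }
}
```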
-
getJdbcSQLQueryBuilder
public JdbcSQLQueryBuilder getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Returns the SQL builder for the SELECT statement.
- Parameters:
  options - (undocumented)
- Returns:
  (undocumented)
-
getLimitClause
public String getLimitClause(Integer limit)
Returns the LIMIT clause for the SELECT statement.
- Parameters:
  limit - (undocumented)
- Returns:
  (undocumented)
-
getOffsetClause
public String getOffsetClause(Integer offset)
Returns the OFFSET clause for the SELECT statement.
- Parameters:
  offset - (undocumented)
- Returns:
  (undocumented)
-
getRenameColumnQuery
-
getSchemaCommentQuery
-
getSchemaQuery
public String getSchemaQuery(String table)
The SQL query that should be used to discover the schema of a table. It only needs to ensure that the result set has the same schema as the table, such as by calling "SELECT * ...". Dialects can override this method to return a query that works best in a particular database.
- Parameters:
  table - The name of the table.
- Returns:
  The SQL query to use for discovering the schema.
-
getTableCommentQuery
-
getTableExistsQuery
public String getTableExistsQuery(String table)
Get the SQL query that should be used to find if the given table exists. Dialects can override this method to return a query that works best in a particular database.
- Parameters:
  table - The name of the table.
- Returns:
  The SQL query to use for checking the table.
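A sketch of an override for a database where the default probe query is slow or unsupported; the LIMIT 1 syntax and URL prefix are assumptions about the target database.

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

class Limit1ExistsDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Probe the table with a query that returns at most one row.
  override def getTableExistsQuery(table: String): String =
    s"SELECT 1 FROM $table LIMIT 1"
}
```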
-
getTableSample
-
getTruncateQuery
public String getTruncateQuery(String table)
The SQL query that should be used to truncate a table. Dialects can override this method to return a query that is suitable for a particular database. For PostgreSQL, for instance, a different query is used to prevent "TRUNCATE" affecting other tables.
- Parameters:
  table - The table to truncate
- Returns:
  The SQL query to use for truncating a table
-
getTruncateQuery
public String getTruncateQuery(String table, scala.Option<Object> cascade)
The SQL query that should be used to truncate a table. Dialects can override this method to return a query that is suitable for a particular database. For PostgreSQL, for instance, a different query is used to prevent "TRUNCATE" affecting other tables.
- Parameters:
  table - The table to truncate
  cascade - Whether or not to cascade the truncation
- Returns:
  The SQL query to use for truncating a table
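A sketch for a PostgreSQL-like database: restrict truncation to the named table unless cascading is explicitly requested. The exact SQL and URL prefix are illustrative.

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

class OnlyTruncateDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Truncate only the named table, appending CASCADE when asked to.
  override def getTruncateQuery(table: String, cascade: Option[Boolean]): String = {
    val suffix = if (cascade.contains(true)) " CASCADE" else ""
    s"TRUNCATE TABLE ONLY $table$suffix"
  }
}
```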
-
getUpdateColumnNullabilityQuery
-
getUpdateColumnTypeQuery
-
indexExists
public boolean indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Checks whether an index exists.
- Parameters:
  indexName - the name of the index
  tableIdent - the table on which the index is to be checked
  options - JDBCOptions of the table
  conn - (undocumented)
- Returns:
  true if the index with indexName exists in the table with tableName, false otherwise
-
isCascadingTruncateTable
public scala.Option<Object> isCascadingTruncateTable()
Return Some[true] iff TRUNCATE TABLE causes cascading by default.
Some[true]: TRUNCATE TABLE causes cascading.
Some[false]: TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
- Returns:
  (undocumented)
-
isSupportedFunction
public boolean isSupportedFunction(String funcName)
Returns whether the database supports the function.
- Parameters:
  funcName - Upper-cased function name
- Returns:
  True if the database supports the function.
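A sketch: advertise push-down support only for a small whitelist of aggregate functions. The set of names and the URL prefix are assumptions about what the target database handles.

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

class WhitelistFunctionDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Function names arrive upper-cased; only these are pushed down.
  private val supported = Set("MAX", "MIN", "SUM", "COUNT", "AVG")

  override def isSupportedFunction(funcName: String): Boolean =
    supported.contains(funcName)
}
```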
-
listIndexes
public TableIndex[] listIndexes(Connection conn, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Lists all the indexes in this table.
- Parameters:
  conn - (undocumented)
  tableIdent - (undocumented)
  options - (undocumented)
- Returns:
  (undocumented)
-
listSchemas
public String[][] listSchemas(Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Lists all the schemas in this table.
- Parameters:
  conn - (undocumented)
  options - (undocumented)
- Returns:
  (undocumented)
-
quoteIdentifier
public String quoteIdentifier(String colName)
Quotes the identifier. This is used to put quotes around the identifier in case the column name is a reserved keyword, or in case it contains characters that require quotes (e.g. space).
- Parameters:
  colName - (undocumented)
- Returns:
  (undocumented)
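A sketch for a database that quotes identifiers with backticks instead of the default double quotes; doubling embedded backticks is the usual precaution. The URL prefix is hypothetical.

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

class BacktickQuotingDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:acmedb")

  // Quote with backticks, doubling any backtick inside the name.
  override def quoteIdentifier(colName: String): String =
    s"`${colName.replace("`", "``")}`"
}
```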
-
removeSchemaCommentQuery
-
renameTable
public String renameTable(String oldTable, String newTable)
Deprecated. Please override the renameTable method with identifiers. Since 3.5.0.
Rename an existing table.
- Parameters:
  oldTable - The existing table.
  newTable - New name of the table.
- Returns:
  The SQL statement to use for renaming the table.
-
renameTable
public String renameTable(Identifier oldTable, Identifier newTable)
Rename an existing table.
- Parameters:
  oldTable - The existing table.
  newTable - New name of the table.
- Returns:
  The SQL statement to use for renaming the table.
-
schemasExists
public boolean schemasExists(Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema)
Check whether a schema exists.
- Parameters:
  conn - (undocumented)
  options - (undocumented)
  schema - (undocumented)
- Returns:
  (undocumented)
-
supportsLimit
public boolean supportsLimit()
Returns true if the dialect supports the LIMIT clause.
Note: Some built-in dialects support the LIMIT clause with a workaround; see OracleDialect.OracleSQLQueryBuilder and MsSqlServerDialect.MsSqlServerSQLQueryBuilder.
- Returns:
  (undocumented)
-
supportsOffset
public boolean supportsOffset()
Returns true if the dialect supports the OFFSET clause.
Note: Some built-in dialects support the OFFSET clause with a workaround; see OracleDialect.OracleSQLQueryBuilder and MySQLDialect.MySQLSQLQueryBuilder.
- Returns:
  (undocumented)
-
supportsTableSample
public boolean supportsTableSample()
-