| Modifier and Type | Class and Description |
|---|---|
| class | HiveMetastoreCatalog.CreateTables |
| class | HiveMetastoreCatalog.ParquetConversions |
| class | HiveMetastoreCatalog.PreInsertionCasts |
| class | HiveMetastoreCatalog.QualifiedTableName: a fully qualified identifier for a table (i.e., database.tableName) |
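The QualifiedTableName entry above represents a table name of the form database.tableName. As a rough illustration of what such an identifier involves, here is a minimal sketch; the class name, the parse helper, and the lower-casing policy are assumptions for illustration, not the Spark class:

```java
import java.util.Locale;

// Hypothetical stand-in for a fully qualified table identifier
// (database.tableName). Hive treats identifiers case-insensitively,
// so this sketch normalizes both parts to lower case.
public class QualifiedName {
    final String database;
    final String table;

    QualifiedName(String database, String table) {
        this.database = database.toLowerCase(Locale.ROOT);
        this.table = table.toLowerCase(Locale.ROOT);
    }

    // Parses "db.table"; an unqualified name falls back to a default database.
    static QualifiedName parse(String identifier, String defaultDb) {
        int dot = identifier.indexOf('.');
        if (dot < 0) {
            return new QualifiedName(defaultDb, identifier);
        }
        return new QualifiedName(identifier.substring(0, dot), identifier.substring(dot + 1));
    }

    @Override
    public String toString() {
        return database + "." + table;
    }
}
```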
| Constructor and Description |
|---|
| HiveMetastoreCatalog(HiveContext hive) |
| Modifier and Type | Method and Description |
|---|---|
| boolean | caseSensitive() |
| void | createDataSourceTable(String tableName, scala.Option<StructType> userSpecifiedSchema, String provider, scala.collection.immutable.Map<String,String> options, boolean isExternal): Creates a data source table (a table created with the USING clause) in Hive's metastore. |
| void | createTable(String databaseName, String tableName, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> schema, boolean allowExisting, scala.Option<org.apache.hadoop.hive.ql.plan.CreateTableDesc> desc): Creates a table with the specified database, table name, table description, and schema. |
| org.apache.spark.sql.hive.HiveMetastoreCatalog.CreateTables$ | CreateTables() |
| scala.collection.Seq<scala.Tuple2<String,Object>> | getTables(scala.Option<String> databaseName) |
| String | hiveDefaultTableFilePath(String tableName) |
| void | invalidateTable(String databaseName, String tableName) |
| org.apache.spark.sql.catalyst.plans.logical.LogicalPlan | lookupRelation(scala.collection.Seq<String> tableIdentifier, scala.Option<String> alias) |
| org.apache.spark.sql.hive.HiveMetastoreCatalog.ParquetConversions$ | ParquetConversions() |
| org.apache.spark.sql.hive.HiveMetastoreCatalog.PreInsertionCasts$ | PreInsertionCasts() |
| void | refreshTable(String databaseName, String tableName) |
| void | registerTable(scala.collection.Seq<String> tableIdentifier, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan): UNIMPLEMENTED: it needs to be decided how in-memory tables will be persisted to the metastore. |
| boolean | tableExists(scala.collection.Seq<String> tableIdentifier) |
| void | unregisterAllTables() |
| void | unregisterTable(scala.collection.Seq<String> tableIdentifier): UNIMPLEMENTED: it needs to be decided how in-memory tables will be persisted to the metastore. |
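Several of the methods above (lookupRelation, tableExists, registerTable, unregisterTable) take the table identifier as a scala.collection.Seq<String> of name parts, e.g. a database part followed by a table part, or just a table part. The sketch below shows one plausible way such identifiers can be normalized and joined for a case-insensitive catalog (caseSensitive() returning false); the class and method names are illustrative assumptions, not the Spark implementation:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Illustrative handling of a multi-part table identifier such as
// ["Sales", "Orders"] or ["Orders"].
public class TableIdentifiers {
    // Lower-cases every part when the catalog is case-insensitive,
    // otherwise returns the parts unchanged.
    static List<String> process(List<String> parts, boolean caseSensitive) {
        if (caseSensitive) {
            return parts;
        }
        return parts.stream()
                    .map(p -> p.toLowerCase(Locale.ROOT))
                    .collect(Collectors.toList());
    }

    // Joins the parts into the db.table form used for metastore lookups.
    static String dbTableName(List<String> parts) {
        return String.join(".", parts);
    }
}
```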
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.sql.catalyst.analysis.Catalog:
getDBTable, getDbTableName, lookupRelation$default$2, processTableIdentifier

Methods inherited from the Logging trait:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public HiveMetastoreCatalog(HiveContext hive)

public void refreshTable(String databaseName, String tableName)

Specified by: refreshTable in interface org.apache.spark.sql.catalyst.analysis.Catalog

public void invalidateTable(String databaseName, String tableName)

public boolean caseSensitive()

Specified by: caseSensitive in interface org.apache.spark.sql.catalyst.analysis.Catalog

public void createDataSourceTable(String tableName, scala.Option<StructType> userSpecifiedSchema, String provider, scala.collection.immutable.Map<String,String> options, boolean isExternal)

public String hiveDefaultTableFilePath(String tableName)

public boolean tableExists(scala.collection.Seq<String> tableIdentifier)

Specified by: tableExists in interface org.apache.spark.sql.catalyst.analysis.Catalog

public org.apache.spark.sql.catalyst.plans.logical.LogicalPlan lookupRelation(scala.collection.Seq<String> tableIdentifier, scala.Option<String> alias)

Specified by: lookupRelation in interface org.apache.spark.sql.catalyst.analysis.Catalog

public scala.collection.Seq<scala.Tuple2<String,Object>> getTables(scala.Option<String> databaseName)

Specified by: getTables in interface org.apache.spark.sql.catalyst.analysis.Catalog
public void createTable(String databaseName, String tableName, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> schema, boolean allowExisting, scala.Option<org.apache.hadoop.hive.ql.plan.CreateTableDesc> desc)

Parameters:
- databaseName - database name
- tableName - table name
- schema - schema of the new table; if not specified, the schema given in crtTbl is used
- allowExisting - if true, ignore AlreadyExistsException
- desc - CreateTableDesc object which contains the SerDe info; currently most features are supported except bucketing

public org.apache.spark.sql.hive.HiveMetastoreCatalog.ParquetConversions$ ParquetConversions()
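The allowExisting flag above means that creating a table which already exists becomes a no-op instead of an error. A minimal in-memory sketch of that contract, where the catalog class and the exception used are stand-ins rather than Spark or Hive API:

```java
import java.util.HashSet;
import java.util.Set;

// Toy catalog illustrating the allowExisting contract: a duplicate
// create is silently ignored when the flag is set, and fails otherwise.
public class ToyCatalog {
    private final Set<String> tables = new HashSet<>();

    void createTable(String databaseName, String tableName, boolean allowExisting) {
        String qualified = databaseName + "." + tableName;
        // Set.add returns false when the entry is already present.
        if (!tables.add(qualified) && !allowExisting) {
            // Stand-in for Hive's AlreadyExistsException.
            throw new IllegalStateException("Table already exists: " + qualified);
        }
    }

    boolean tableExists(String databaseName, String tableName) {
        return tables.contains(databaseName + "." + tableName);
    }
}
```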
public org.apache.spark.sql.hive.HiveMetastoreCatalog.CreateTables$ CreateTables()
public org.apache.spark.sql.hive.HiveMetastoreCatalog.PreInsertionCasts$ PreInsertionCasts()
public void registerTable(scala.collection.Seq<String> tableIdentifier, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan)

Specified by: OverrideCatalog.registerTable in interface org.apache.spark.sql.catalyst.analysis.Catalog
public void unregisterTable(scala.collection.Seq<String> tableIdentifier)

Specified by: OverrideCatalog.unregisterTable in interface org.apache.spark.sql.catalyst.analysis.Catalog
public void unregisterAllTables()

Specified by: unregisterAllTables in interface org.apache.spark.sql.catalyst.analysis.Catalog