Modifier and Type | Class and Description |
---|---|
`class` | `HiveMetastoreCatalog.CreateTables` |
`class` | `HiveMetastoreCatalog.PreInsertionCasts` |
Constructor and Description |
---|
`HiveMetastoreCatalog(HiveContext hive)` |
Modifier and Type | Method and Description |
---|---|
`boolean` | `caseSensitive()` |
`void` | `createTable(String databaseName, String tableName, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> schema, boolean allowExisting, scala.Option<org.apache.hadoop.hive.ql.plan.CreateTableDesc> desc)` Create a table with the specified database, table name, table description, and schema. |
`org.apache.spark.sql.hive.HiveMetastoreCatalog.CreateTables$` | `CreateTables()` |
`org.apache.spark.sql.catalyst.plans.logical.LogicalPlan` | `lookupRelation(scala.collection.Seq<String> tableIdentifier, scala.Option<String> alias)` |
`org.apache.spark.sql.hive.HiveMetastoreCatalog.PreInsertionCasts$` | `PreInsertionCasts()` |
`void` | `registerTable(scala.collection.Seq<String> tableIdentifier, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan)` UNIMPLEMENTED: It needs to be decided how we will persist in-memory tables to the metastore. |
`boolean` | `tableExists(scala.collection.Seq<String> tableIdentifier)` |
`void` | `unregisterAllTables()` |
`void` | `unregisterTable(scala.collection.Seq<String> tableIdentifier)` UNIMPLEMENTED: It needs to be decided how we will persist in-memory tables to the metastore. |
Methods inherited from class java.lang.Object:
`equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`

Methods inherited from interface org.apache.spark.sql.catalyst.analysis.Catalog:
`getDBTable, getDbTableName, lookupRelation$default$2, processTableIdentifier`

Methods inherited from interface org.apache.spark.Logging:
`initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning`
`public HiveMetastoreCatalog(HiveContext hive)`
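As a rough sketch of how this constructor is reached (the catalog is normally created internally by `HiveContext`; app name and master here are illustrative, and a Hive-enabled Spark 1.x classpath is assumed):

```scala
// Sketch only: assumes Spark 1.x built with Hive support on the classpath.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(
  new SparkConf().setAppName("catalog-demo").setMaster("local[*]"))
val hive = new HiveContext(sc)

// Construct the catalog directly from an existing HiveContext.
val catalog = new org.apache.spark.sql.hive.HiveMetastoreCatalog(hive)
```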
`public boolean caseSensitive()`
Specified by: `caseSensitive` in interface `org.apache.spark.sql.catalyst.analysis.Catalog`
`public boolean tableExists(scala.collection.Seq<String> tableIdentifier)`
Specified by: `tableExists` in interface `org.apache.spark.sql.catalyst.analysis.Catalog`
`public org.apache.spark.sql.catalyst.plans.logical.LogicalPlan lookupRelation(scala.collection.Seq<String> tableIdentifier, scala.Option<String> alias)`
Specified by: `lookupRelation` in interface `org.apache.spark.sql.catalyst.analysis.Catalog`
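A hedged sketch of checking for a metastore table and resolving it into a Catalyst logical plan (assumes `catalog` is a `HiveMetastoreCatalog` built from a working `HiveContext`; the database and table names are illustrative):

```scala
// Sketch: `catalog` is assumed to be an existing HiveMetastoreCatalog.
val tableId = Seq("default", "my_table") // database name, then table name

if (catalog.tableExists(tableId)) {
  // Resolve the table into a logical plan, optionally under an alias.
  val plan = catalog.lookupRelation(tableId, Some("t"))
  println(plan) // the resolved Catalyst LogicalPlan
}
```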
`public void createTable(String databaseName, String tableName, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> schema, boolean allowExisting, scala.Option<org.apache.hadoop.hive.ql.plan.CreateTableDesc> desc)`
Parameters:
- `databaseName` - database name
- `tableName` - table name
- `schema` - schema of the new table; if not specified, the schema specified in `crtTbl` will be used
- `allowExisting` - if true, ignore `AlreadyExistsException`
- `desc` - `CreateTableDesc` object which contains the SerDe info; currently most of the features are supported except bucketing

`public org.apache.spark.sql.hive.HiveMetastoreCatalog.CreateTables$ CreateTables()`

`public org.apache.spark.sql.hive.HiveMetastoreCatalog.PreInsertionCasts$ PreInsertionCasts()`
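The `createTable` method above can be exercised roughly as follows. This is a sketch, not a definitive usage: it assumes `catalog` is an existing `HiveMetastoreCatalog` with a reachable metastore, uses `AttributeReference` as one concrete `Attribute` implementation, and the table and column names are invented for illustration:

```scala
// Sketch: assumes Spark 1.x catalyst types and a live Hive metastore.
import org.apache.spark.sql.catalyst.expressions.AttributeReference

// Illustrative two-column schema; AttributeReference is a concrete Attribute.
val schema = Seq(
  AttributeReference("id", org.apache.spark.sql.catalyst.types.IntegerType,
    nullable = false)(),
  AttributeReference("name", org.apache.spark.sql.catalyst.types.StringType,
    nullable = true)())

// allowExisting = true means AlreadyExistsException is ignored;
// None for desc means no explicit CreateTableDesc (SerDe info).
catalog.createTable("default", "people", schema,
  allowExisting = true, desc = None)
```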
`public void registerTable(scala.collection.Seq<String> tableIdentifier, org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan)`
Specified by: `OverrideCatalog.registerTable` in interface `org.apache.spark.sql.catalyst.analysis.Catalog`
`public void unregisterTable(scala.collection.Seq<String> tableIdentifier)`
Specified by: `OverrideCatalog.unregisterTable` in interface `org.apache.spark.sql.catalyst.analysis.Catalog`
`public void unregisterAllTables()`
Specified by: `unregisterAllTables` in interface `org.apache.spark.sql.catalyst.analysis.Catalog`