public class HiveContext extends SQLContext implements Logging
Nested classes inherited from class org.apache.spark.sql.SQLContext: `SQLContext.implicits$`
| Constructor and Description |
| --- |
| `HiveContext(JavaSparkContext sc)` |
| `HiveContext(SparkContext sc)` |
| Modifier and Type | Method and Description |
| --- | --- |
| `void` | `analyze(String tableName)` Analyzes the given table in the current database to generate statistics, which will be used in query optimizations. |
| `static` | `CONVERT_CTAS()` |
| `static` | `CONVERT_METASTORE_PARQUET_WITH_SCHEMA_MERGING()` |
| `static` | `CONVERT_METASTORE_PARQUET()` |
| `static` | `HIVE_EXECUTION_VERSION()` |
| `static` | `HIVE_METASTORE_BARRIER_PREFIXES()` |
| `static` | `HIVE_METASTORE_JARS()` |
| `static` | `HIVE_METASTORE_SHARED_PREFIXES()` |
| `static` | `HIVE_METASTORE_VERSION()` |
| `static` | `HIVE_THRIFT_SERVER_ASYNC()` |
| `static String` | `hiveExecutionVersion()` The version of Hive used internally by Spark SQL. |
| `HiveContext` | `newSession()` Returns a new HiveContext as a new session, which will have separate SQLConf, UDF/UDAF registries, temporary tables, and SessionState, but will share the same CacheManager, IsolatedClientLoader, and Hive clients (both execution and metadata) with the existing HiveContext. |
| `static scala.collection.immutable.Map<String,String>` | `newTemporaryConfiguration()` Constructs a configuration for Hive, where the metastore is located in a temp directory. |
| `void` | `refreshTable(String tableName)` Invalidates and refreshes all the cached metadata of the given table. |
| `void` | `setConf(String key, String value)` Sets the given Spark SQL configuration property. |
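A minimal usage sketch tying the methods above together (Spark 1.x API; assumes a running `SparkContext` named `sc`, and a hypothetical Hive table `src` — this requires a live Spark/Hive deployment and is illustrative only):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// Assumes `sc` is an existing SparkContext; `src` is a hypothetical Hive table.
val hiveContext = new HiveContext(sc)

// Set a Spark SQL configuration property for this session.
hiveContext.setConf("spark.sql.shuffle.partitions", "8")

// Run HiveQL against tables registered in the Hive metastore.
val df = hiveContext.sql("SELECT key, value FROM src")

// After the table changed outside Spark, drop its cached metadata,
// then recompute its size statistics in the Hive metastore.
hiveContext.refreshTable("src")
hiveContext.analyze("src")
```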
Methods inherited from class org.apache.spark.sql.SQLContext: applySchema, applySchema, applySchema, applySchema, baseRelationToDataFrame, cacheTable, clearActive, clearCache, createDataFrame, createDataFrame, createDataFrame, createDataFrame, createDataFrame, createDataFrame, createDataFrame, createDataFrame, createDataset, createDataset, createDataset, createExternalTable, createExternalTable, createExternalTable, createExternalTable, createExternalTable, createExternalTable, dropTempTable, emptyDataFrame, experimental, getAllConfs, getConf, getConf, getOrCreate, implicits, isCached, isRootContext, jdbc, jdbc, jdbc, jsonFile, jsonFile, jsonFile, jsonRDD, jsonRDD, jsonRDD, jsonRDD, jsonRDD, jsonRDD, listener, listenerManager, load, load, load, load, load, load, parquetFile, parquetFile, range, range, range, read, setActive, setConf, sparkContext, sql, table, tableNames, tableNames, tables, tables, udf, uncacheTable
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public HiveContext(SparkContext sc)
public HiveContext(JavaSparkContext sc)
public static String hiveExecutionVersion()
public static HIVE_METASTORE_VERSION()
public static HIVE_EXECUTION_VERSION()
public static HIVE_METASTORE_JARS()
public static CONVERT_METASTORE_PARQUET()
public static CONVERT_METASTORE_PARQUET_WITH_SCHEMA_MERGING()
public static CONVERT_CTAS()
public static HIVE_METASTORE_SHARED_PREFIXES()
public static HIVE_METASTORE_BARRIER_PREFIXES()
public static HIVE_THRIFT_SERVER_ASYNC()
public static scala.collection.immutable.Map<String,String> newTemporaryConfiguration()
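For example, `newTemporaryConfiguration()` can back a throwaway metastore in tests (a sketch, assuming Spark 1.x test code; how the returned entries are applied to a Hadoop/Hive configuration is deployment-specific):

```scala
import org.apache.spark.sql.hive.HiveContext

// Build a Hive configuration whose metastore lives in a temp directory,
// useful for isolated tests that must not touch a real metastore.
val tempConf: Map[String, String] = HiveContext.newTemporaryConfiguration()

// Inspect the generated metastore settings before wiring them in.
tempConf.foreach { case (k, v) => println(s"$k = $v") }
```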
public HiveContext newSession()
Overrides:
newSession in class SQLContext
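A short sketch of what the session separation described above means in practice (assumes an existing `HiveContext` named `hiveContext`; the temp table name is hypothetical):

```scala
// Each session gets its own SQLConf and its own temporary tables,
// but shares the CacheManager and the underlying Hive clients.
val session2 = hiveContext.newSession()

// This only affects session2's SQLConf, not the original context's.
session2.setConf("spark.sql.shuffle.partitions", "4")

// A temp table registered in one session is not visible in the other.
hiveContext.sql("SELECT 1 AS x").registerTempTable("only_here")
// session2.table("only_here") would fail: temp tables are per-session.
```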
public void refreshTable(String tableName)
Parameters:
tableName - (undocumented)

public void analyze(String tableName)
Analyzes the given table in the current database to generate statistics, which will be used in query optimizations. Right now, it only supports Hive tables and it only updates the size of a Hive table in the Hive metastore.
Parameters:
tableName - (undocumented)

public void setConf(String key, String value)
Set the given Spark SQL configuration property.
Overrides:
setConf in class SQLContext
Parameters:
key - (undocumented)
value - (undocumented)