org.apache.spark.sql.hive

LocalHiveContext

class LocalHiveContext extends HiveContext

DEPRECATED: Use HiveContext instead.

Annotations
@deprecated
Deprecated

(Since version 1.1) Use HiveContext instead. It will still create a local metastore if one is not specified. However, note that the default directory is ./metastore_db, not ./metastore.
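
A minimal migration sketch (the SparkConf settings are placeholders): HiveContext is a drop-in replacement, and it creates a local metastore automatically when no remote metastore is configured.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("example").setMaster("local"))
    // Unlike LocalHiveContext, the local metastore defaults to ./metastore_db.
    val hiveContext = new HiveContext(sc)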

Linear Supertypes
HiveContext, SQLContext, Serializable, Serializable, UDFRegistration, ExpressionConversions, SQLConf, Logging, AnyRef, Any

Instance Constructors

  1. new LocalHiveContext(sc: SparkContext)

Type Members

  1. implicit class DslAttribute extends AnyRef

    Definition Classes
    ExpressionConversions
  2. implicit class DslExpression extends ImplicitOperators

    Definition Classes
    ExpressionConversions
  3. implicit class DslString extends ImplicitOperators

    Definition Classes
    ExpressionConversions
  4. implicit class DslSymbol extends ImplicitAttribute

    Definition Classes
    ExpressionConversions
  5. abstract class ImplicitAttribute extends ImplicitOperators

    Definition Classes
    ExpressionConversions
  6. abstract class QueryExecution extends HiveContext.QueryExecution

    Extends QueryExecution with Hive-specific features.

  7. class SparkPlanner extends SparkStrategies

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def analyze(tableName: String): Unit

    Analyzes the given table in the current database to generate statistics, which will be used in query optimizations.

    Currently it only supports Hive tables, and it only updates the size of the table in the Hive metastore.
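
    A minimal sketch; the table name is hypothetical:

    // Gather statistics for an existing Hive table so the optimizer can
    // use them (e.g. to choose join strategies for small tables).
    hiveContext.analyze("my_hive_table")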

    Definition Classes
    HiveContext
  7. lazy val analyzer: Analyzer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  8. def applySchema(rowRDD: RDD[Row], schema: StructType): SchemaRDD

    :: DeveloperApi :: Creates a SchemaRDD from an RDD containing Rows by applying a schema to this RDD. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema; otherwise, a runtime exception will be thrown. Example:

    import org.apache.spark.sql._
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    
    val schema =
      StructType(
        StructField("name", StringType, false) ::
        StructField("age", IntegerType, true) :: Nil)
    
    val people =
      sc.textFile("examples/src/main/resources/people.txt").map(
        _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
    val peopleSchemaRDD = sqlContext.applySchema(people, schema)
    peopleSchemaRDD.printSchema
    // root
    // |-- name: string (nullable = false)
    // |-- age: integer (nullable = true)
    
    peopleSchemaRDD.registerTempTable("people")
    sqlContext.sql("select name from people").collect.foreach(println)
    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  9. def approxCountDistinct(e: Expression, rsd: Double): ApproxCountDistinct

    Definition Classes
    ExpressionConversions
  10. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  11. def avg(e: Expression): Average

    Definition Classes
    ExpressionConversions
  12. implicit def binaryToLiteral(a: Array[Byte]): Literal

    Definition Classes
    ExpressionConversions
  13. implicit def booleanToLiteral(b: Boolean): Literal

    Definition Classes
    ExpressionConversions
  14. implicit def byteToLiteral(b: Byte): Literal

    Definition Classes
    ExpressionConversions
  15. def cacheTable(tableName: String): Unit

    Caches the specified table in-memory.
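
    A minimal sketch of the caching lifecycle, assuming a table named "people" has already been registered:

    // Cache the table's data in memory; later queries read the cached copy.
    sqlContext.cacheTable("people")
    assert(sqlContext.isCached("people"))
    // Release the memory when done (see uncacheTable below).
    sqlContext.uncacheTable("people")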

    Definition Classes
    SQLContext
  16. lazy val catalog: HiveMetastoreCatalog with OverrideCatalog

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  17. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  18. def configure(): Unit

    Sets up the system initially or after a RESET command.

    Attributes
    protected
  19. def count(e: Expression): Count

    Definition Classes
    ExpressionConversions
  20. def countDistinct(e: Expression*): CountDistinct

    Definition Classes
    ExpressionConversions
  21. def createParquetFile[A <: Product](path: String, allowExisting: Boolean = true, conf: Configuration = new Configuration())(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): SchemaRDD

    :: Experimental :: Creates an empty parquet file with the schema of class A, which can be registered as a table. This registered table can be used as the target of future insertInto operations.

    val sqlContext = new SQLContext(...)
    import sqlContext._
    
    case class Person(name: String, age: Int)
    createParquetFile[Person]("path/to/file.parquet").registerTempTable("people")
    sql("INSERT INTO people SELECT 'michael', 29")
    A

    A case class type that describes the desired schema of the parquet file to be created.

    path

    The path where the directory containing parquet metadata should be created. Data inserted into this table will also be stored at this location.

    allowExisting

    When false, an exception will be thrown if this directory already exists.

    conf

    A Hadoop configuration object that can be used to specify options to the parquet output format.

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  22. implicit def createSchemaRDD[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): SchemaRDD

    Creates a SchemaRDD from an RDD of case classes.
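
    A minimal sketch, using a hypothetical Person case class:

    import sqlContext.createSchemaRDD
    case class Person(name: String, age: Int)
    val people = sc.parallelize(Seq(Person("alice", 30), Person("bob", 25)))
    // The implicit conversion turns RDD[Person] into a SchemaRDD, making
    // methods such as registerTempTable available.
    people.registerTempTable("people")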

    Definition Classes
    SQLContext
  23. def createTable[A <: Product](tableName: String, allowExisting: Boolean = true)(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): Unit

    Creates a table using the schema of the given class.
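
    A minimal sketch; the case class and table name are hypothetical:

    case class LogEntry(level: String, message: String)
    // Creates a Hive table named "logs" whose schema matches LogEntry.
    hiveContext.createTable[LogEntry]("logs", allowExisting = true)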

    A

    A case class that is used to describe the schema of the table to be created.

    tableName

    The name of the table to create.

    allowExisting

    When false, an exception will be thrown if the table already exists.

    Definition Classes
    HiveContext
  24. implicit def decimalToLiteral(d: BigDecimal): Literal

    Definition Classes
    ExpressionConversions
  25. implicit def doubleToLiteral(d: Double): Literal

    Definition Classes
    ExpressionConversions
  26. lazy val emptyResult: RDD[Row]

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  27. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  29. def executePlan(plan: LogicalPlan): QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  30. def executeSql(sql: String): LocalHiveContext.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  31. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  32. def first(e: Expression): First

    Definition Classes
    ExpressionConversions
  33. implicit def floatToLiteral(f: Float): Literal

    Definition Classes
    ExpressionConversions
  34. lazy val functionRegistry: HiveFunctionRegistry with OverrideFunctionRegistry

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  35. def getAllConfs: Map[String, String]

    Return all the configuration properties that have been set (i.e. not the default). This creates a new copy of the config properties in the form of a Map.

    Definition Classes
    SQLConf
  36. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  37. def getConf(key: String, defaultValue: String): String

    Return the value of the Spark SQL configuration property for the given key. If the key is not set yet, return defaultValue.
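
    A minimal sketch (the keys shown are standard Spark SQL properties):

    // Read a property, falling back to a default when it has not been set.
    val partitions = sqlContext.getConf("spark.sql.shuffle.partitions", "200")
    // Override it for this context (see setConf below).
    sqlContext.setConf("spark.sql.shuffle.partitions", "10")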

    Definition Classes
    SQLConf
  38. def getConf(key: String): String

    Return the value of the Spark SQL configuration property for the given key.

    Definition Classes
    SQLConf
  39. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  40. val hivePlanner: SparkPlanner with HiveStrategies

    Definition Classes
    HiveContext
  41. lazy val hiveconf: HiveConf

    SQLConf and HiveConf contracts:

    1. Reuse the existing started SessionState if there is one.
    2. When the Hive session is first initialized, params in HiveConf will get picked up by the SQLConf. Additionally, any properties set by set() or a SET command inside sql() will be set in the SQLConf *as well as* in the HiveConf.

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  42. implicit def intToLiteral(i: Int): Literal

    Definition Classes
    ExpressionConversions
  43. def isCached(tableName: String): Boolean

    Returns true if the table is currently cached in-memory.

    Definition Classes
    SQLContext
  44. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  45. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  46. def jsonFile(path: String, samplingRatio: Double): SchemaRDD

    :: Experimental ::

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  47. def jsonFile(path: String, schema: StructType): SchemaRDD

    :: Experimental :: Loads a JSON file (one object per line) and applies the given schema, returning the result as a SchemaRDD.

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  48. def jsonFile(path: String): SchemaRDD

    Loads a JSON file (one object per line), returning the result as a SchemaRDD. It goes through the entire dataset once to determine the schema.
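
    A minimal sketch; the path is hypothetical:

    // The schema is inferred by scanning the file once.
    val people = sqlContext.jsonFile("examples/src/main/resources/people.json")
    people.printSchema()
    people.registerTempTable("people")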

    Definition Classes
    SQLContext
  49. def jsonRDD(json: RDD[String], samplingRatio: Double): SchemaRDD

    :: Experimental ::

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  50. def jsonRDD(json: RDD[String], schema: StructType): SchemaRDD

    :: Experimental :: Loads an RDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a SchemaRDD.

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  51. def jsonRDD(json: RDD[String]): SchemaRDD

    Loads an RDD[String] storing JSON objects (one object per record), returning the result as a SchemaRDD. It goes through the entire dataset once to determine the schema.
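
    A minimal sketch with inline JSON strings:

    // Each element of the RDD is one complete JSON object.
    val json = sc.parallelize(Seq(
      """{"name": "alice", "age": 30}""",
      """{"name": "bob", "age": 25}"""))
    val peopleJson = sqlContext.jsonRDD(json)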

    Definition Classes
    SQLContext
  52. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  53. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  54. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  55. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  56. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  57. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  58. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  59. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  60. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  61. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  62. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  63. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  64. implicit def logicalPlanToSparkQuery(plan: LogicalPlan): SchemaRDD

    :: DeveloperApi :: Allows catalyst LogicalPlans to be executed as a SchemaRDD. Note that the LogicalPlan interface is considered internal, and thus not guaranteed to be stable. As a result, using them directly is not recommended.

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  65. implicit def longToLiteral(l: Long): Literal

    Definition Classes
    ExpressionConversions
  66. def lower(e: Expression): Lower

    Definition Classes
    ExpressionConversions
  67. def max(e: Expression): Max

    Definition Classes
    ExpressionConversions
  68. lazy val metastorePath: String

  69. def min(e: Expression): Min

    Definition Classes
    ExpressionConversions
  70. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  71. final def notify(): Unit

    Definition Classes
    AnyRef
  72. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  73. val optimizer: Optimizer.type

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  74. lazy val outputBuffer: OutputStream { ... /* 4 definitions in type refinement */ }

    Attributes
    protected
    Definition Classes
    HiveContext
  75. def parquetFile(path: String): SchemaRDD

    Loads a Parquet file, returning the result as a SchemaRDD.

    Definition Classes
    SQLContext
  76. def parseSql(sql: String): LogicalPlan

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  77. val parser: SqlParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  78. val planner: SparkPlanner with HiveStrategies

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  79. val prepareForExecution: RuleExecutor[SparkPlan] { val batches: List[this.Batch] }

    Prepares a planned SparkPlan for execution by inserting shuffle operations as needed.

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  80. def registerFunction[T](name: String, func: Function22[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  81. def registerFunction[T](name: String, func: Function21[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  82. def registerFunction[T](name: String, func: Function20[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  83. def registerFunction[T](name: String, func: Function19[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  84. def registerFunction[T](name: String, func: Function18[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  85. def registerFunction[T](name: String, func: Function17[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  86. def registerFunction[T](name: String, func: Function16[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  87. def registerFunction[T](name: String, func: Function15[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  88. def registerFunction[T](name: String, func: Function14[_, _, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  89. def registerFunction[T](name: String, func: Function13[_, _, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  90. def registerFunction[T](name: String, func: Function12[_, _, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  91. def registerFunction[T](name: String, func: Function11[_, _, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  92. def registerFunction[T](name: String, func: Function10[_, _, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  93. def registerFunction[T](name: String, func: Function9[_, _, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  94. def registerFunction[T](name: String, func: Function8[_, _, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  95. def registerFunction[T](name: String, func: Function7[_, _, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  96. def registerFunction[T](name: String, func: Function6[_, _, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  97. def registerFunction[T](name: String, func: Function5[_, _, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  98. def registerFunction[T](name: String, func: Function4[_, _, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  99. def registerFunction[T](name: String, func: Function3[_, _, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  100. def registerFunction[T](name: String, func: Function2[_, _, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    Definition Classes
    UDFRegistration
  101. def registerFunction[T](name: String, func: Function1[_, T])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    registerFunction 1-22 were generated by this script:

    (1 to 22).map { x =>
      val types = (1 to x).map(x => "_").reduce(_ + ", " + _)
      s"""
      def registerFunction[T: TypeTag](name: String, func: Function$x[$types, T]): Unit = {
        def builder(e: Seq[Expression]) = ScalaUdf(func, ScalaReflection.schemaFor(typeTag[T]).dataType, e)
        functionRegistry.registerFunction(name, builder)
      }
      """
    }
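
    A minimal usage sketch; the UDF name and table are hypothetical:

    // Register a one-argument Scala function as a SQL UDF.
    sqlContext.registerFunction("strLen", (s: String) => s.length)
    sqlContext.sql("SELECT strLen(name) FROM people")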

    Definition Classes
    UDFRegistration
  102. def registerRDDAsTable(rdd: SchemaRDD, tableName: String): Unit

    Registers the given RDD as a temporary table in the catalog. Temporary tables exist only during the lifetime of this instance of SQLContext.
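
    A minimal sketch, assuming the peopleSchemaRDD from the applySchema example above:

    // Equivalent to peopleSchemaRDD.registerTempTable("people").
    sqlContext.registerRDDAsTable(peopleSchemaRDD, "people")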

    Definition Classes
    SQLContext
  103. def runHive(cmd: String, maxRows: Int = 1000): Seq[String]

    Execute the command using Hive and return the results as a sequence. Each element in the sequence is one row.

    Attributes
    protected
    Definition Classes
    HiveContext
  104. def runSqlHive(sql: String): Seq[String]

    Runs the specified SQL query using Hive.

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext
  105. lazy val sessionState: SessionState

    SQLConf and HiveConf contracts:

    1. Reuse the existing started SessionState if there is one.
    2. When the Hive session is first initialized, params in HiveConf will get picked up by the SQLConf. Additionally, any properties set by set() or a SET command inside sql() will be set in the SQLConf *as well as* in the HiveConf.

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  106. def setConf(key: String, value: String): Unit

    Set the given Spark SQL configuration property.

    Definition Classes
    HiveContext → SQLConf
  107. def setConf(props: Properties): Unit

    Set Spark SQL configuration properties.

    Definition Classes
    SQLConf
  108. val settings: Map[String, String]

    Only a low degree of contention is expected for the conf, so it does NOT use a ConcurrentHashMap.

    Attributes
    protected[org.apache.spark]
    Definition Classes
    SQLConf
  109. implicit def shortToLiteral(s: Short): Literal

    Definition Classes
    ExpressionConversions
  110. val sparkContext: SparkContext

    Definition Classes
    SQLContext
  111. def sql(sqlText: String): SchemaRDD

    Executes a SQL query using Spark, returning the result as a SchemaRDD. The dialect that is used for SQL parsing can be configured with 'spark.sql.dialect'.
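
    A minimal sketch, assuming a registered "people" table:

    val teenagers = hiveContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
    teenagers.collect().foreach(println)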

    Definition Classes
    HiveContext → SQLContext
  112. implicit def stringToLiteral(s: String): Literal

    Definition Classes
    ExpressionConversions
  113. def sum(e: Expression): Sum

    Definition Classes
    ExpressionConversions
  114. def sumDistinct(e: Expression): SumDistinct

    Definition Classes
    ExpressionConversions
  115. implicit def symbolToUnresolvedAttribute(s: Symbol): UnresolvedAttribute

    Definition Classes
    ExpressionConversions
  116. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  117. def table(tableName: String): SchemaRDD

    Returns the specified table as a SchemaRDD.
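
    A minimal sketch:

    // Look up a registered or metastore table by name.
    val people = sqlContext.table("people")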

    Definition Classes
    SQLContext
  118. implicit def timestampToLiteral(t: Timestamp): Literal

    Definition Classes
    ExpressionConversions
  119. def toString(): String

    Definition Classes
    AnyRef → Any
  120. def uncacheTable(tableName: String): Unit

    Removes the specified table from the in-memory cache.

    Definition Classes
    SQLContext
  121. def upper(e: Expression): Upper

    Definition Classes
    ExpressionConversions
  122. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  123. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  124. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  125. lazy val warehousePath: String

Deprecated Value Members

  1. def hiveql(hqlQuery: String): SchemaRDD

    Definition Classes
    HiveContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.1)

  2. def hql(hqlQuery: String): SchemaRDD

    Definition Classes
    HiveContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.1)

