public class HiveTableScan extends SparkPlan implements LeafNode, scala.Product, scala.Serializable
Constructor and Description
---
`HiveTableScan(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> requestedAttributes, MetastoreRelation relation, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> partitionPruningPred, HiveContext context)`
Modifier and Type | Method and Description
---|---
`scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.AttributeReference>` | `attributes()`
`HiveContext` | `context()`
`RDD<org.apache.spark.sql.catalyst.expressions.Row>` | `execute()` Runs this query returning the result as an RDD.
`scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.AttributeReference>` | `output()`
`scala.Option<org.apache.spark.sql.catalyst.expressions.Expression>` | `partitionPruningPred()`
`scala.collection.Seq<org.apache.hadoop.hive.ql.metadata.Partition>` | `prunePartitions(scala.collection.Seq<org.apache.hadoop.hive.ql.metadata.Partition> partitions)` Prunes partitions not involved in the query plan.
`MetastoreRelation` | `relation()`
`scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>` | `requestedAttributes()`
Methods inherited from class org.apache.spark.sql.execution.SparkPlan:
codegenEnabled, executeCollect, makeCopy, outputPartitioning, requiredChildDistribution

Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan:
expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, references, schema, schemaString, simpleString, statePrefix, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode:
apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, map, mapChildren, nodeName, numberedTreeString, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

Methods inherited from interface scala.Product:
productArity, productElement, productIterator, productPrefix

Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public HiveTableScan(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> requestedAttributes, MetastoreRelation relation, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> partitionPruningPred, HiveContext context)
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> requestedAttributes()
public MetastoreRelation relation()
public scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> partitionPruningPred()
public HiveContext context()
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.AttributeReference> attributes()
public scala.collection.Seq<org.apache.hadoop.hive.ql.metadata.Partition> prunePartitions(scala.collection.Seq<org.apache.hadoop.hive.ql.metadata.Partition> partitions)
Prunes partitions not involved in the query plan.
Parameters:
partitions - All partitions of the relation.

public RDD<org.apache.spark.sql.catalyst.expressions.Row> execute()
Runs this query returning the result as an RDD.
Specified by:
execute in class SparkPlan
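When `partitionPruningPred` is present, `prunePartitions` drops every partition the predicate rejects before any data is read; when it is `None`, all partitions survive. The real method evaluates a catalyst `Expression` against Hive `Partition` metadata, which is heavyweight to set up, so the sketch below models the same idea with plain Java collections — the `HivePartition` record and the `Predicate` are hypothetical stand-ins, not Spark or Hive APIs.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.stream.Collectors;

/** Hypothetical stand-in for org.apache.hadoop.hive.ql.metadata.Partition:
    a partition identified by its partition-column values. */
record HivePartition(Map<String, String> values) {}

public class PruningSketch {
    /** Models prunePartitions: with a pruning predicate present, keep only the
        partitions it accepts; with an empty Optional (None), keep them all. */
    static List<HivePartition> prunePartitions(List<HivePartition> partitions,
                                               Optional<Predicate<HivePartition>> pruningPred) {
        return pruningPred
                .map(pred -> partitions.stream().filter(pred).collect(Collectors.toList()))
                .orElse(partitions);
    }

    public static void main(String[] args) {
        List<HivePartition> parts = List.of(
                new HivePartition(Map.of("ds", "2024-01-01")),
                new HivePartition(Map.of("ds", "2024-01-02")));

        // A predicate playing the role of the pushed-down filter `ds = '2024-01-02'`.
        List<HivePartition> pruned = prunePartitions(
                parts, Optional.of(p -> "2024-01-02".equals(p.values().get("ds"))));
        System.out.println(pruned.size()); // prints 1: only the matching partition survives
    }
}
```

Because pruning happens against metastore metadata alone, the partitions that are filtered out here are never scanned by `execute()`.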
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.AttributeReference> output()
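Besides partition pruning, the scan also prunes columns: the rows produced by `execute()` carry only `requestedAttributes`, which is why `output()` is derived from the requested attributes rather than the relation's full schema. A minimal sketch of that projection step, using plain maps as hypothetical stand-ins for catalyst rows:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ColumnPruningSketch {
    /** Models the projection inside execute(): each row keeps only the
        requested attributes, in the order they were requested. */
    static List<Map<String, Object>> project(List<Map<String, Object>> rows,
                                             List<String> requestedAttributes) {
        return rows.stream()
                .map(row -> {
                    Map<String, Object> projected = new LinkedHashMap<>();
                    for (String attr : requestedAttributes) {
                        projected.put(attr, row.get(attr));
                    }
                    return projected;
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = List.of(
                Map.<String, Object>of("key", 1, "value", "a", "ds", "2024-01-01"));
        // The query only needs `key` and `value`, so `ds` never leaves the scan.
        System.out.println(project(rows, List.of("key", "value")));
    }
}
```

Pruning columns this early keeps downstream operators from deserializing Hive columns the query never references.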