Package org.apache.spark.sql.sources
Class BaseRelation
java.lang.Object
    org.apache.spark.sql.sources.BaseRelation
Represents a collection of tuples with a known schema. Classes that extend BaseRelation must be able to produce the schema of their data in the form of a StructType. Concrete implementations should inherit from one of the descendant Scan classes, which define various abstract methods for execution.
 BaseRelations must also define an equality function that only returns true when the two instances will return the same data. This equality function is used when determining when it is safe to substitute cached results for a given relation.
- Since:
- 1.3.0
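A minimal sketch in Scala of a concrete relation, assuming a hypothetical RangeRelation class (the class name and the single "id" column are illustrative, not part of Spark): it produces its schema as a StructType, mixes in the TableScan trait for execution, and defines data-based equality so cached results can be reused.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

class RangeRelation(val start: Long, val end: Long)(@transient val sqlContext: SQLContext)
  extends BaseRelation with TableScan {

  // Every BaseRelation must be able to produce its schema as a StructType.
  override def schema: StructType =
    StructType(StructField("id", LongType, nullable = false) :: Nil)

  // TableScan is one of the Scan traits that define how the relation is executed.
  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.range(start, end).map(Row(_))

  // Equality keyed on the data the relation returns, so cached results can
  // safely be substituted for an equivalent relation.
  override def equals(other: Any): Boolean = other match {
    case r: RangeRelation => r.start == start && r.end == end
    case _                => false
  }

  override def hashCode(): Int = (start, end).hashCode()
}
```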
Constructor Summary
- BaseRelation()

Method Summary
- boolean needConversion()
  Whether it needs to convert the objects in Row to the internal representation, for example: java.lang.String to UTF8String, java.lang.Decimal to Decimal.
- abstract StructType schema()
- long sizeInBytes()
  Returns an estimated size of this relation in bytes.
- abstract SQLContext sqlContext()
- Filter[] unhandledFilters(Filter[] filters)
  Returns the list of Filters that this data source may not be able to handle.
Constructor Details

BaseRelation
public BaseRelation()
Method Details

needConversion
public boolean needConversion()
Whether it needs to convert the objects in Row to the internal representation, for example: java.lang.String to UTF8String, java.lang.Decimal to Decimal. If needConversion is false, buildScan() should return an RDD of InternalRow.
- Returns:
- (undocumented)
- Since:
- 1.4.0
- Note:
- The internal representation is not stable across releases and thus data sources outside of Spark SQL should leave this as true.
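For illustration only (per the note above, sources outside Spark SQL should normally leave the default in place), a hypothetical relation that keeps needConversion as true and returns plain Row objects from its scan might look like this; the NamesRelation class and its single "name" column are assumptions for the sketch.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

class NamesRelation(names: Seq[String])(@transient val sqlContext: SQLContext)
  extends BaseRelation with TableScan {

  override def schema: StructType =
    StructType(StructField("name", StringType, nullable = false) :: Nil)

  // Keep the default: buildScan() emits plain Row objects holding java.lang.String
  // values, and Spark converts them to the internal representation (UTF8String).
  // Returning false would require the scan to produce InternalRow instead.
  override def needConversion: Boolean = true

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(names).map(Row(_))
}
```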
 
schema
public abstract StructType schema()
sizeInBytes
public long sizeInBytes()
Returns an estimated size of this relation in bytes. This information is used by the planner to decide when it is safe to broadcast a relation and can be overridden by sources that know the size ahead of time. By default, the system will assume that tables are too large to broadcast. This method will be called multiple times during query planning and thus should not perform expensive operations for each invocation.
- Returns:
- (undocumented)
- Since:
- 1.3.0
- Note:
- It is always better to overestimate size than underestimate, because underestimation could lead to execution plans that are suboptimal (i.e. broadcasting a very large table).
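A hedged sketch of overriding sizeInBytes, assuming a hypothetical FileBackedRelation whose data lives in a single local file; a real source would also mix in one of the Scan traits, and the "content" column is illustrative.

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.BaseRelation
import org.apache.spark.sql.types.{BinaryType, StructField, StructType}

class FileBackedRelation(path: String)(@transient val sqlContext: SQLContext)
  extends BaseRelation {

  override def schema: StructType =
    StructType(StructField("content", BinaryType, nullable = false) :: Nil)

  // Report the known on-disk size so the planner can decide whether this
  // relation is small enough to broadcast; overestimating is the safer error.
  override def sizeInBytes: Long = new java.io.File(path).length()
}
```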
 
sqlContext
public abstract SQLContext sqlContext()
unhandledFilters
public Filter[] unhandledFilters(Filter[] filters)
Returns the list of Filters that this data source may not be able to handle. These returned Filters will be evaluated by Spark SQL after data is output by a scan. By default, this function will return all filters, as it is always safe to double evaluate a Filter. However, specific implementations can override this function to avoid double filtering when they are capable of processing a filter internally.
- Parameters:
- filters- (undocumented)
- Returns:
- (undocumented)
- Since:
- 1.6.0
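A sketch of this override, assuming a hypothetical KeyValueRelation backed by an in-memory Map with illustrative columns "key" and "value": it evaluates EqualTo filters on "key" internally and reports every other filter as unhandled so Spark SQL re-evaluates it after the scan.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

class KeyValueRelation(data: Map[String, String])(@transient val sqlContext: SQLContext)
  extends BaseRelation with PrunedFilteredScan {

  override def schema: StructType = StructType(Seq(
    StructField("key", StringType, nullable = false),
    StructField("value", StringType, nullable = false)))

  // Only EqualTo on "key" is evaluated inside this source; everything else is
  // left in the returned array so Spark SQL applies it again after the scan.
  override def unhandledFilters(filters: Array[Filter]): Array[Filter] =
    filters.filterNot {
      case EqualTo("key", _) => true
      case _                 => false
    }

  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    // Apply only the handled EqualTo("key", ...) filters here.
    val keys = filters.collect { case EqualTo("key", v: String) => v }.toSet
    val rows = (if (keys.isEmpty) data else data.filter { case (k, _) => keys(k) }).toSeq
    sqlContext.sparkContext.parallelize(rows).map { case (k, v) =>
      Row.fromSeq(requiredColumns.map { case "key" => k; case "value" => v })
    }
  }
}
```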
 
 