trait CachedBatchSerializer extends Serializable
Provides APIs that handle transformations of SQL data associated with the cache/persist APIs.
- Annotations
- @DeveloperApi() @Since("3.1.0")
- Source
- CachedBatchSerializer.scala
- Linear Supertypes
- Serializable, AnyRef, Any
Abstract Value Members
abstract def buildFilter(predicates: Seq[Expression], cachedAttributes: Seq[Attribute]): (Int, Iterator[CachedBatch]) ⇒ Iterator[CachedBatch]
Builds a function that can be used to filter batches prior to being decompressed. In most cases extending SimpleMetricsCachedBatchSerializer will provide the filter logic necessary; you will need to provide metrics for this to work. SimpleMetricsCachedBatch provides the APIs to hold those metrics and explains the metrics used (really just min and max). Note that this is intended to skip batches that are not needed; the actual filtering of individual rows is handled later. A minimal sketch follows the parameter list below.
- predicates
the set of expressions to use for filtering.
- cachedAttributes
the schema/attributes of the data that is cached. This can be helpful if you don't store it with the data.
- returns
a function that takes the partition id and the iterator of batches in the partition. It returns an iterator of batches that should be decompressed.
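A minimal sketch of what an implementation might look like, assuming it sits inside a CachedBatchSerializer subclass: a pass-through filter that never skips a batch. A real serializer would more likely extend SimpleMetricsCachedBatchSerializer and let its min/max metrics drive the skipping.
```scala
import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression}
import org.apache.spark.sql.columnar.CachedBatch

// Pass-through filter: hand every batch on for decompression,
// regardless of the predicates. Nothing is ever skipped.
override def buildFilter(
    predicates: Seq[Expression],
    cachedAttributes: Seq[Attribute]): (Int, Iterator[CachedBatch]) => Iterator[CachedBatch] = {
  (_: Int, cachedBatches: Iterator[CachedBatch]) => cachedBatches
}
```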
abstract def convertCachedBatchToColumnarBatch(input: RDD[CachedBatch], cacheAttributes: Seq[Attribute], selectedAttributes: Seq[Attribute], conf: SQLConf): RDD[ColumnarBatch]
Convert the cached data into a ColumnarBatch. This is currently only used if supportsColumnarOutput() returns true for the associated schema, but there are other checks that can force row-based output. One of the main advantages of columnar output over row-based output is that the code generation is more standard and can be combined with code generation for downstream operations. A sketch follows the parameter list below.
- input
the cached batches that should be converted.
- cacheAttributes
the attributes of the data in the batch.
- selectedAttributes
the fields that should be loaded from the data and the order they should appear in the output batch.
- conf
the configuration for the job.
- returns
an RDD of the input cached batches transformed into the ColumnarBatch format.
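A sketch under two loud assumptions: RowBasedBatch is a hypothetical CachedBatch (defined below, not part of the Spark API) that simply stores copied rows, and the type dispatch is limited to IntegerType columns to keep the example short. The ingest sketches further down show where such batches would come from.
```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.columnar.CachedBatch
import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.{StructField, StructType}
import org.apache.spark.sql.vectorized.{ColumnVector, ColumnarBatch}

// Hypothetical cached-batch type used by these sketches: it just keeps
// copied rows around. A real serializer would store compressed columns.
case class RowBasedBatch(rows: Array[InternalRow]) extends CachedBatch {
  override def numRows: Int = rows.length
  override def sizeInBytes: Long = 100L * rows.length // rough estimate
}

override def convertCachedBatchToColumnarBatch(
    input: RDD[CachedBatch],
    cacheAttributes: Seq[Attribute],
    selectedAttributes: Seq[Attribute],
    conf: SQLConf): RDD[ColumnarBatch] = {
  // Map each selected attribute back to its ordinal in the cached schema.
  val ordinals = selectedAttributes.map(a => cacheAttributes.indexWhere(_.exprId == a.exprId))
  val outSchema = StructType(selectedAttributes.map(a => StructField(a.name, a.dataType, a.nullable)))
  input.map { case RowBasedBatch(rows) =>
    val vectors = OnHeapColumnVector.allocateColumns(rows.length, outSchema)
    for (r <- rows.indices; (ord, v) <- ordinals.zipWithIndex) {
      if (rows(r).isNullAt(ord)) vectors(v).putNull(r)
      else vectors(v).putInt(r, rows(r).getInt(ord)) // IntegerType only, for brevity
    }
    val batch = new ColumnarBatch(vectors.toArray[ColumnVector])
    batch.setNumRows(rows.length)
    batch
  }
}
```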
abstract def convertCachedBatchToInternalRow(input: RDD[CachedBatch], cacheAttributes: Seq[Attribute], selectedAttributes: Seq[Attribute], conf: SQLConf): RDD[InternalRow]
Convert the cached batch into InternalRows. If you want this to be performant, code generation is advised; see the sketch after the parameter list below.
- input
the cached batches that should be converted.
- cacheAttributes
the attributes of the data in the batch.
- selectedAttributes
the fields that should be loaded from the data and the order they should appear in the output rows.
- conf
the configuration for the job.
- returns
RDD of the rows that were stored in the cached batches.
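A matching row-based sketch, again assuming the hypothetical RowBasedBatch defined in the previous sketch. The generated UnsafeProjection both narrows each row to selectedAttributes and provides fast, code-generated field access:
```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.{Attribute, UnsafeProjection}
import org.apache.spark.sql.columnar.CachedBatch
import org.apache.spark.sql.internal.SQLConf

override def convertCachedBatchToInternalRow(
    input: RDD[CachedBatch],
    cacheAttributes: Seq[Attribute],
    selectedAttributes: Seq[Attribute],
    conf: SQLConf): RDD[InternalRow] = {
  input.mapPartitions { batches =>
    // Code-generated projection from the cached schema down to the
    // requested columns; built once per partition, not once per row.
    val toUnsafe = UnsafeProjection.create(selectedAttributes, cacheAttributes)
    batches.flatMap { case RowBasedBatch(rows) => rows.iterator.map(toUnsafe) }
  }
}
```
Note that the projection reuses its output buffer, which is the usual contract for operator output rows; consumers that buffer rows are expected to copy them.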
abstract def convertColumnarBatchToCachedBatch(input: RDD[ColumnarBatch], schema: Seq[Attribute], storageLevel: StorageLevel, conf: SQLConf): RDD[CachedBatch]
Convert an RDD[ColumnarBatch] into an RDD[CachedBatch] in preparation for caching the data. This will only be called if supportsColumnarInput() returned true for the given schema and the plan up to this point could produce columnar output without modifying it. A sketch follows the parameter list below.
- input
the input RDD to be converted.
- schema
the schema of the data being stored.
- storageLevel
where the data will be stored.
- conf
the config for the query.
- returns
The data converted into a format more suitable for caching.
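One possible sketch, reusing the hypothetical RowBasedBatch from the earlier sketch: flatten the incoming columnar batches back into rows. A serious implementation would keep the data columnar; this only shows the plumbing.
```scala
import scala.collection.JavaConverters._

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.columnar.CachedBatch
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.vectorized.ColumnarBatch
import org.apache.spark.storage.StorageLevel

override def convertColumnarBatchToCachedBatch(
    input: RDD[ColumnarBatch],
    schema: Seq[Attribute],
    storageLevel: StorageLevel,
    conf: SQLConf): RDD[CachedBatch] = {
  input.map { batch =>
    // rowIterator() reuses a single mutable row, so each row has to be
    // copied before it can be stored in the cached batch.
    RowBasedBatch(batch.rowIterator().asScala.map(_.copy()).toArray)
  }
}
```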
abstract def convertInternalRowToCachedBatch(input: RDD[InternalRow], schema: Seq[Attribute], storageLevel: StorageLevel, conf: SQLConf): RDD[CachedBatch]
Convert an RDD[InternalRow] into an RDD[CachedBatch] in preparation for caching the data. A sketch follows the parameter list below.
- input
the input RDD to be converted.
- schema
the schema of the data being stored.
- storageLevel
where the data will be stored.
- conf
the config for the query.
- returns
The data converted into a format more suitable for caching.
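The row-based ingest path in the same sketch style; conf.columnBatchSize (spark.sql.inMemoryColumnarStorage.batchSize) is a natural choice for the rows-per-batch target, though any batching policy works:
```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.columnar.CachedBatch
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.storage.StorageLevel

override def convertInternalRowToCachedBatch(
    input: RDD[InternalRow],
    schema: Seq[Attribute],
    storageLevel: StorageLevel,
    conf: SQLConf): RDD[CachedBatch] = {
  val rowsPerBatch = conf.columnBatchSize
  input.mapPartitions { rows =>
    // The input iterator may reuse row objects, so copy before buffering.
    rows.map(_.copy()).grouped(rowsPerBatch).map(g => RowBasedBatch(g.toArray))
  }
}
```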
abstract def supportsColumnarInput(schema: Seq[Attribute]): Boolean
Can convertColumnarBatchToCachedBatch() be called instead of convertInternalRowToCachedBatch() for the given schema? True if it can and false if it cannot. Columnar input is only supported if the plan could produce columnar output. Currently this is mostly supported by input formats like Parquet and ORC, but more operations are likely to be supported soon.
- schema
the schema of the data being stored.
- returns
True if columnar input can be supported, else false.
abstract def supportsColumnarOutput(schema: StructType): Boolean
Can convertCachedBatchToColumnarBatch() be called instead of convertCachedBatchToInternalRow() for the given schema? True if it can and false if it cannot. Columnar output is typically preferred because it is more efficient. Note that convertCachedBatchToInternalRow() must always be supported, as there are other checks that can force row-based output.
- schema
the schema of the data being checked.
- returns
true if columnar output should be used for this schema, else false.
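To make the two capability checks concrete: a serializer built around the row-based sketches above might refuse columnar input and advertise columnar output only for the schemas its (int-only) columnar path can actually produce.
```scala
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.types.{IntegerType, StructType}

// Row-based ingest only: Spark will always build the cache through
// convertInternalRowToCachedBatch().
override def supportsColumnarInput(schema: Seq[Attribute]): Boolean = false

// The columnar-output sketch above only writes integer columns, so only
// advertise columnar output for all-int schemas. Row-based output must
// keep working even when this returns true.
override def supportsColumnarOutput(schema: StructType): Boolean =
  schema.fields.forall(_.dataType == IntegerType)
```
A finished implementation is selected through the spark.sql.cache.serializer static configuration (one serializer per session, set before the session starts), for example --conf spark.sql.cache.serializer=com.example.MyCachedBatchSerializer, where com.example.MyCachedBatchSerializer is a hypothetical class name.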
Concrete Value Members
def vectorTypes(attributes: Seq[Attribute], conf: SQLConf): Option[Seq[String]]
The exact Java types of the columns that are output in columnar processing mode. This is a performance optimization for code generation and is optional.
- attributes
the attributes to be output.
- conf
the config for the query that will read the data.
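For example, a serializer whose columnar path uses on-heap vectors (as the sketch for convertCachedBatchToColumnarBatch above does) could report that class for every column; returning None just means Spark falls back to generic ColumnVector access:
```scala
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
import org.apache.spark.sql.internal.SQLConf

override def vectorTypes(attributes: Seq[Attribute], conf: SQLConf): Option[Seq[String]] =
  Some(attributes.map(_ => classOf[OnHeapColumnVector].getName))
```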