Package org.apache.spark.storage
Interface BlockData
All Known Implementing Classes:
DiskBlockData
public interface BlockData
Abstracts away how blocks are stored and provides different ways to read the underlying block data. Callers should call dispose() when they're done with the block.
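As a caller-side illustration of that contract, the following is a minimal, hypothetical Scala helper (not part of Spark). It assumes it lives inside Spark's own source tree, since BlockData and ChunkedByteBuffer are internal (private[spark]) types, and that copying the whole block onto the heap is acceptable.

import java.nio.ByteBuffer

import org.apache.spark.storage.BlockData

// Hypothetical helper: copy a block's bytes to the heap, then always release
// the block's underlying resources, per the dispose() contract described above.
def copyAndDispose(block: BlockData): Array[Byte] = {
  try {
    // The allocator argument lets the caller choose how chunks are allocated;
    // plain on-heap buffers are used here.
    block.toChunkedByteBuffer((size: Int) => ByteBuffer.allocate(size)).toArray
  } finally {
    block.dispose()
  }
}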
Method Summary

Modifier and Type | Method | Description
void | dispose() |
long | size() |
ByteBuffer | toByteBuffer() |
org.apache.spark.util.io.ChunkedByteBuffer | toChunkedByteBuffer(scala.Function1<Object, ByteBuffer> allocator) |
InputStream | toInputStream() |
Object | toNetty() | Returns a Netty-friendly wrapper for the block's data.
Method Details
dispose
void dispose()

size
long size()

toByteBuffer
ByteBuffer toByteBuffer()

toChunkedByteBuffer
org.apache.spark.util.io.ChunkedByteBuffer toChunkedByteBuffer(scala.Function1<Object, ByteBuffer> allocator)
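The allocator parameter controls how each chunk of the returned buffer is allocated. A brief illustration, assuming a Scala call site with a BlockData instance named blockData in scope (in Scala, the scala.Function1<Object, ByteBuffer> parameter is simply an Int => ByteBuffer function):

import java.nio.ByteBuffer

// Copy the block into on-heap chunks.
val onHeap = blockData.toChunkedByteBuffer((size: Int) => ByteBuffer.allocate(size))

// Or into direct (off-heap) chunks, e.g. when the bytes will be handed to native I/O.
val offHeap = blockData.toChunkedByteBuffer((size: Int) => ByteBuffer.allocateDirect(size))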
toInputStream
InputStream toInputStream()
toNetty
Object toNetty()
Returns a Netty-friendly wrapper for the block's data. Please see ManagedBuffer.convertToNetty() for more details.
Returns:
(undocumented)
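The return type is left as Object because what counts as "Netty-friendly" depends on where the block's bytes live. As a rough, hypothetical sketch (not Spark's actual implementations): an in-memory block might be wrapped as an io.netty.buffer.ByteBuf, while a file-backed block might be exposed as a FileRegion for zero-copy transfer.

import java.io.File
import java.nio.ByteBuffer

import io.netty.buffer.Unpooled
import io.netty.channel.DefaultFileRegion

// In-memory data: wrap the existing buffer without copying its contents.
def nettyWrapperForBuffer(buf: ByteBuffer): Object = Unpooled.wrappedBuffer(buf.duplicate())

// File-backed data: a FileRegion lets Netty use zero-copy transferTo on the socket.
def nettyWrapperForFile(file: File, offset: Long, length: Long): Object =
  new DefaultFileRegion(file, offset, length)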