Package org.apache.spark.storage
Interface BlockData
- All Known Implementing Classes:
- DiskBlockData
public interface BlockData
Abstracts away how blocks are stored and provides different ways to read the underlying block data. Callers should call dispose() when they're done with the block.
Method Summary
- void dispose()
- long size()
- ByteBuffer toByteBuffer()
- org.apache.spark.util.io.ChunkedByteBuffer toChunkedByteBuffer(scala.Function1<Object, ByteBuffer> allocator)
- InputStream toInputStream()
- Object toNetty(): Returns a Netty-friendly wrapper for the block's data.
- Object toNettyForSsl(): Returns a Netty-friendly wrapper for the block's data.
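The interface above can be illustrated with a simplified, hypothetical analogue. This is not Spark's implementation (Spark's BlockData is package-private, and implementations such as DiskBlockData are backed by files); the sketch below is an in-memory, byte-array-backed version that shows the intended shape of the contract, with toChunkedByteBuffer, toNetty, and toNettyForSsl omitted since they depend on Spark and Netty types.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Hypothetical, simplified analogue of Spark's BlockData, for illustration only.
interface SimpleBlockData {
    InputStream toInputStream();
    ByteBuffer toByteBuffer();
    long size();
    void dispose();
}

// The simplest possible backing store: a heap byte array.
class ByteArrayBlockData implements SimpleBlockData {
    private byte[] bytes;

    ByteArrayBlockData(byte[] bytes) { this.bytes = bytes; }

    @Override public InputStream toInputStream() {
        return new ByteArrayInputStream(bytes);
    }

    @Override public ByteBuffer toByteBuffer() {
        // Read-only view so callers cannot mutate the block's contents.
        return ByteBuffer.wrap(bytes).asReadOnlyBuffer();
    }

    @Override public long size() { return bytes.length; }

    // For a heap array there is nothing to unmap or free; dropping the
    // reference stands in for releasing off-heap or memory-mapped storage.
    @Override public void dispose() { bytes = null; }
}

public class BlockDataSketch {
    public static void main(String[] args) throws Exception {
        SimpleBlockData block = new ByteArrayBlockData("spark".getBytes("UTF-8"));
        System.out.println(block.size());                 // 5
        System.out.println(block.toInputStream().read()); // 115, i.e. 's'
        block.dispose();
    }
}
```

The point of the multiple read methods is that different consumers want the data in different forms (a stream for serializers, a ByteBuffer for in-memory access, a Netty wrapper for the network layer) without committing to one storage representation.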
Method Details
dispose
void dispose()
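Since the interface contract says callers should call dispose() when they are done with the block, the usual defensive pattern is a try/finally so the underlying storage is released even if reading throws. A minimal sketch, using a hypothetical Block class as a stand-in for a BlockData instance:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class DisposeSketch {
    // Hypothetical block handle; stands in for a BlockData instance.
    static class Block {
        private byte[] bytes = {1, 2, 3};
        InputStream toInputStream() { return new ByteArrayInputStream(bytes); }
        void dispose() { bytes = null; }   // release the underlying storage
        boolean isDisposed() { return bytes == null; }
    }

    public static void main(String[] args) throws Exception {
        Block block = new Block();
        try {
            // ... read and use the block's data ...
            block.toInputStream().read();
        } finally {
            // Always dispose, even if reading throws.
            block.dispose();
        }
        System.out.println(block.isDisposed()); // true
    }
}
```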
size
long size()
toByteBuffer
ByteBuffer toByteBuffer()
toChunkedByteBuffer
org.apache.spark.util.io.ChunkedByteBuffer toChunkedByteBuffer(scala.Function1<Object, ByteBuffer> allocator)
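The allocator parameter lets the caller decide how each chunk's buffer is obtained (for example, heap versus direct allocation) instead of the block deciding for them. A hypothetical analogue of this idea, with a plain IntFunction standing in for the scala.Function1 allocator and a List of ByteBuffers standing in for Spark's ChunkedByteBuffer:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

public class ChunkedCopySketch {
    // Hypothetical analogue of toChunkedByteBuffer: copy `data` into
    // fixed-size chunks, letting the caller supply the allocation strategy
    // (e.g. ByteBuffer::allocate for heap, ByteBuffer::allocateDirect for
    // off-heap). Names here are illustrative, not Spark API.
    static List<ByteBuffer> toChunks(byte[] data, int chunkSize,
                                     IntFunction<ByteBuffer> allocator) {
        List<ByteBuffer> chunks = new ArrayList<>();
        for (int pos = 0; pos < data.length; pos += chunkSize) {
            int len = Math.min(chunkSize, data.length - pos);
            ByteBuffer chunk = allocator.apply(len);
            chunk.put(data, pos, len);
            chunk.flip(); // make the chunk readable
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] data = new byte[10];
        List<ByteBuffer> chunks = toChunks(data, 4, ByteBuffer::allocate);
        System.out.println(chunks.size()); // 3 chunks: 4 + 4 + 2 bytes
    }
}
```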
toInputStream
InputStream toInputStream()
toNetty
Object toNetty()
Returns a Netty-friendly wrapper for the block's data. Please see ManagedBuffer.convertToNetty() for more details.
Returns:
(undocumented)
toNettyForSsl
Object toNettyForSsl()
Returns a Netty-friendly wrapper for the block's data. Please see ManagedBuffer.convertToNettyForSsl() for more details.
Returns:
(undocumented)