class StorageLevel extends Externalizable

Developer API

Flags for controlling the storage of an RDD. Each StorageLevel records whether to use memory or ExternalBlockStore, whether to drop the RDD to disk if it falls out of memory or ExternalBlockStore, whether to keep the data in memory in a serialized format, and whether to replicate the RDD partitions across multiple nodes.

The org.apache.spark.storage.StorageLevel singleton object contains static constants for commonly used storage levels. To create your own storage level object, use the factory method on the singleton object (StorageLevel(...)).
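For example, a brief sketch of both approaches. It assumes a live SparkContext `sc` and an illustrative input path; the argument order of the factory method (useDisk, useMemory, useOffHeap, deserialized, replication) matches the boolean members listed below:

```scala
import org.apache.spark.storage.StorageLevel

// Persist with a predefined constant: keep partitions in memory,
// spilling to disk when they do not fit.
val cached = sc.textFile("hdfs://...").persist(StorageLevel.MEMORY_AND_DISK)

// Build a custom level with the factory method: serialized in-memory
// storage, spilling to disk, replicated to two nodes.
val twoReplicas = StorageLevel(useDisk = true, useMemory = true,
  useOffHeap = false, deserialized = false, replication = 2)
```

A custom level like `twoReplicas` is passed to `RDD.persist(...)` the same way as the predefined constants.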

Annotations
@DeveloperApi()
Source
StorageLevel.scala
Linear Supertypes
Externalizable, Serializable, AnyRef, Any

Instance Constructors

  1. new StorageLevel()

Value Members

  1. def clone(): StorageLevel
    Definition Classes
    StorageLevel → AnyRef
  2. def description: String
  3. def deserialized: Boolean
  4. def equals(other: Any): Boolean
    Definition Classes
    StorageLevel → AnyRef → Any
  5. def hashCode(): Int
    Definition Classes
    StorageLevel → AnyRef → Any
  6. def isValid: Boolean
  7. def readExternal(in: ObjectInput): Unit
    Definition Classes
    StorageLevel → Externalizable
  8. def replication: Int
  9. def toInt: Int
  10. def toString(): String
    Definition Classes
    StorageLevel → AnyRef → Any
  11. def useDisk: Boolean
  12. def useMemory: Boolean
  13. def useOffHeap: Boolean
  14. def writeExternal(out: ObjectOutput): Unit
    Definition Classes
    StorageLevel → Externalizable
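The readExternal/writeExternal pair and toInt above exist because StorageLevel serializes its boolean flags compactly rather than via default Java serialization. A minimal standalone sketch of that pattern follows; the class name and the exact bit layout are assumptions for illustration, not copied from StorageLevel.scala:

```scala
import java.io.{Externalizable, ObjectInput, ObjectOutput}

// Illustrative flag-holder in the style of StorageLevel. Bit
// positions below are assumed for the sketch, not taken from Spark.
class FlagLevel(var useDisk: Boolean, var useMemory: Boolean,
                var useOffHeap: Boolean, var deserialized: Boolean,
                var replication: Int) extends Externalizable {

  // Externalizable requires a public no-arg constructor for deserialization.
  def this() = this(false, false, false, false, 1)

  // Pack the four booleans into the low bits of one Int.
  def toInt: Int = {
    var ret = 0
    if (useDisk) ret |= 8
    if (useMemory) ret |= 4
    if (useOffHeap) ret |= 2
    if (deserialized) ret |= 1
    ret
  }

  override def writeExternal(out: ObjectOutput): Unit = {
    out.writeByte(toInt)
    out.writeByte(replication)
  }

  override def readExternal(in: ObjectInput): Unit = {
    val flags = in.readByte()
    useDisk = (flags & 8) != 0
    useMemory = (flags & 4) != 0
    useOffHeap = (flags & 2) != 0
    deserialized = (flags & 1) != 0
    replication = in.readByte()
  }
}
```

Writing the flags as a single byte plus a replication byte keeps the serialized form to two bytes, which matters because a StorageLevel accompanies every block sent between nodes.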