Package org.apache.spark.resource
Class ExecutorResourceRequests
java.lang.Object
org.apache.spark.resource.ExecutorResourceRequests
- All Implemented Interfaces:
- Serializable
A set of Executor resource requests. This is used in conjunction with the ResourceProfile to
 programmatically specify the resources needed for an RDD; the resulting profile is applied at the
 stage level.
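For orientation only, a minimal sketch (not part of the original Javadoc) of how this class is typically combined with TaskResourceRequests, ResourceProfileBuilder, and RDD.withResources to apply a profile at the stage level. The resource amounts, the GPU discovery script path, and the names dataRdd and runModel are assumptions made for illustration:

  import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

  // Executor-side requirements for the stage (amounts are illustrative).
  val execReqs = new ExecutorResourceRequests()
    .cores(4)
    .memory("6g")
    .resource("gpu", 1, "/opt/spark/scripts/getGpus.sh", "nvidia.com") // hypothetical script path

  // Task-side requirements are declared separately.
  val taskReqs = new TaskResourceRequests().cpus(1).resource("gpu", 1)

  // Bundle both into a ResourceProfile and attach it to an RDD; Spark applies
  // the profile to the stages that compute that RDD.
  val profile = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()
  val result = dataRdd.withResources(profile).map(runModel) // dataRdd and runModel assumed to exist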
Constructor Summary
Constructors
ExecutorResourceRequests()
Method Summary
Modifier and Type | Method | Description
ExecutorResourceRequests | cores(int amount) | Specify number of cores per Executor.
ExecutorResourceRequests | memory(String amount) | Specify heap memory.
ExecutorResourceRequests | memoryOverhead(String amount) | Specify overhead memory.
ExecutorResourceRequests | offHeapMemory(String amount) | Specify off heap memory.
ExecutorResourceRequests | pysparkMemory(String amount) | Specify pyspark memory.
scala.collection.immutable.Map<String,ExecutorResourceRequest> | requests() | Returns all the resource requests for the executor.
java.util.Map<String,ExecutorResourceRequest> | requestsJMap() | (Java-specific) Returns all the resource requests for the executor.
ExecutorResourceRequests | resource(String resourceName, long amount, String discoveryScript, String vendor) | Amount of a particular custom resource (GPU, FPGA, etc.) to use.
String | toString() |
Constructor Details
ExecutorResourceRequests
public ExecutorResourceRequests()
Method Details
cores
public ExecutorResourceRequests cores(int amount)
Specify number of cores per Executor. This is a convenient API to add an ExecutorResourceRequest for the "cores" resource.
Parameters:
amount - Number of cores to allocate per Executor.
Returns:
(undocumented)
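As a brief usage sketch (the amounts are illustrative, not from this page), cores returns the same ExecutorResourceRequests instance, so further requests can be chained:

  val reqs = new ExecutorResourceRequests()
    .cores(8)      // 8 cores per executor
    .memory("16g") // additional requests can be chained on the returned object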
 
- 
memory
public ExecutorResourceRequests memory(String amount)
Specify heap memory. The value specified will be converted to MiB. This is a convenient API to add an ExecutorResourceRequest for the "memory" resource.
Parameters:
amount - Amount of memory. In the same format as JVM memory strings (e.g. 512m, 2g). Default unit is MiB if not specified.
Returns:
(undocumented)
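A small sketch of the accepted size strings (the amounts themselves are illustrative):

  val r1 = new ExecutorResourceRequests().memory("2g")    // 2 GiB, stored as 2048 MiB
  val r2 = new ExecutorResourceRequests().memory("512m")  // 512 MiB
  val r3 = new ExecutorResourceRequests().memory("4096")  // no unit given, interpreted as MiB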
 
- 
memoryOverhead
public ExecutorResourceRequests memoryOverhead(String amount)
Specify overhead memory. The value specified will be converted to MiB. This is a convenient API to add an ExecutorResourceRequest for the "memoryOverhead" resource.
Parameters:
amount - Amount of memory. In the same format as JVM memory strings (e.g. 512m, 2g). Default unit is MiB if not specified.
Returns:
(undocumented)
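For illustration (the amounts are assumptions), overhead memory is requested alongside, not instead of, heap memory:

  val reqs = new ExecutorResourceRequests()
    .memory("4g")          // executor heap
    .memoryOverhead("1g")  // additional non-heap overhead per executor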
 
- 
offHeapMemory
public ExecutorResourceRequests offHeapMemory(String amount)
Specify off heap memory. The value specified will be converted to MiB. This value only takes effect when MEMORY_OFFHEAP_ENABLED is true. This is a convenient API to add an ExecutorResourceRequest for the "offHeap" resource.
Parameters:
amount - Amount of memory. In the same format as JVM memory strings (e.g. 512m, 2g). Default unit is MiB if not specified.
Returns:
(undocumented)
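A sketch assuming off-heap memory has been enabled via spark.memory.offHeap.enabled (the config key behind MEMORY_OFFHEAP_ENABLED); without that setting the request has no effect. The amount shown is illustrative:

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .set("spark.memory.offHeap.enabled", "true")                  // required for offHeapMemory to take effect
  val reqs = new ExecutorResourceRequests().offHeapMemory("2g")   // 2 GiB of off-heap memory per executor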
 
- 
pysparkMemory
public ExecutorResourceRequests pysparkMemory(String amount)
Specify pyspark memory. The value specified will be converted to MiB. This is a convenient API to add an ExecutorResourceRequest for the "pyspark.memory" resource.
Parameters:
amount - Amount of memory. In the same format as JVM memory strings (e.g. 512m, 2g). Default unit is MiB if not specified.
Returns:
(undocumented)
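A brief sketch (the amount is illustrative); this request is only relevant for executors that run PySpark workers:

  // Reserve memory for Python worker processes per executor ("pyspark.memory" resource).
  val reqs = new ExecutorResourceRequests().pysparkMemory("2g")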
 
- 
requests
public scala.collection.immutable.Map<String,ExecutorResourceRequest> requests()
Returns all the resource requests for the executor.
Returns:
(undocumented)
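For illustration, the returned map is keyed by resource name ("cores", "memory", and so on), with each value an ExecutorResourceRequest carrying the requested amount; the requests built here are assumptions:

  val reqs = new ExecutorResourceRequests().cores(4).memory("6g")
  reqs.requests.foreach { case (name, req) =>
    println(s"$name -> ${req.amount}") // e.g. cores -> 4, memory -> 6144 (MiB)
  }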
 
- 
requestsJMap
public java.util.Map<String,ExecutorResourceRequest> requestsJMap()
(Java-specific) Returns all the resource requests for the executor.
Returns:
(undocumented)
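A sketch of the Java-oriented accessor; it returns a java.util.Map and is shown here from Scala only to stay consistent with the other examples (the requests built are assumptions):

  val reqs = new ExecutorResourceRequests().cores(4).memory("6g")
  val jmap: java.util.Map[String, org.apache.spark.resource.ExecutorResourceRequest] = reqs.requestsJMap
  println(jmap.get("memory").amount) // 6144 (MiB)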
 
- 
resource
public ExecutorResourceRequests resource(String resourceName, long amount, String discoveryScript, String vendor)
Amount of a particular custom resource (GPU, FPGA, etc.) to use. The resource names supported correspond to the regular Spark configs with the prefix removed. For instance, resources like GPUs are gpu (Spark configs spark.executor.resource.gpu.*). If you pass in a resource that the cluster manager doesn't support, the result is undefined: it may error, or it may simply be ignored. This is a convenient API to add an ExecutorResourceRequest for custom resources.
Parameters:
resourceName - Name of the resource.
amount - Amount of that resource per executor to use.
discoveryScript - Optional script used to discover the resources. This is required on some cluster managers that don't tell Spark the addresses of the resources allocated. The script runs on Executors startup to discover the addresses of the resources available.
vendor - Optional vendor, required for some cluster managers.
Returns:
(undocumented)
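A hedged sketch of a GPU request; the discovery script path is hypothetical and is only needed on cluster managers that don't report resource addresses themselves, and the vendor string is typically only consulted on Kubernetes:

  val reqs = new ExecutorResourceRequests()
    .resource(
      resourceName = "gpu",                               // maps to spark.executor.resource.gpu.*
      amount = 2,                                         // GPUs per executor (illustrative)
      discoveryScript = "/opt/spark/scripts/getGpus.sh",  // hypothetical discovery script
      vendor = "nvidia.com")                              // vendor hint, e.g. for Kubernetes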
 
- 
toString
public String toString()
 