Package org.apache.spark.resource
Class ExecutorResourceRequests
java.lang.Object
  org.apache.spark.resource.ExecutorResourceRequests
- All Implemented Interfaces:
  Serializable
A set of Executor resource requests. This is used in conjunction with the ResourceProfile to
programmatically specify the resources needed for an RDD that will be applied at the
stage level.
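The description above can be sketched in use; a minimal example, assuming a running SparkContext `sc` and a cluster manager that supports stage-level scheduling (values are illustrative):

```scala
import org.apache.spark.resource.{ExecutorResourceRequests, TaskResourceRequests, ResourceProfileBuilder}

// Build the per-executor resource requests (amounts are illustrative).
val execReqs = new ExecutorResourceRequests()
  .cores(4)
  .memory("6g")            // heap memory per executor
  .memoryOverhead("1g")    // overhead memory per executor

// Task requests pair with executor requests in a ResourceProfile.
val taskReqs = new TaskResourceRequests().cpus(1)

val profile = new ResourceProfileBuilder()
  .require(execReqs)
  .require(taskReqs)
  .build()

// Stages computing this RDD are scheduled with the profile's resources.
val rdd = sc.parallelize(1 to 100).withResources(profile)
```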
Constructor Summary
- ExecutorResourceRequests()
Method Summary
- ExecutorResourceRequests cores(int amount): Specify number of cores per Executor.
- ExecutorResourceRequests memory(String amount): Specify heap memory.
- ExecutorResourceRequests memoryOverhead(String amount): Specify overhead memory.
- ExecutorResourceRequests offHeapMemory(String amount): Specify off heap memory.
- ExecutorResourceRequests pysparkMemory(String amount): Specify pyspark memory.
- scala.collection.immutable.Map<String,ExecutorResourceRequest> requests(): Returns all the resource requests for the executor.
- java.util.Map<String,ExecutorResourceRequest> requestsJMap(): (Java-specific) Returns all the resource requests for the executor.
- ExecutorResourceRequests resource(String resourceName, long amount, String discoveryScript, String vendor): Amount of a particular custom resource (GPU, FPGA, etc.) to use.
- String toString()
-
Constructor Details
-
ExecutorResourceRequests
public ExecutorResourceRequests()
-
-
Method Details
-
cores
public ExecutorResourceRequests cores(int amount)
Specify the number of cores per Executor. This is a convenience API that adds an ExecutorResourceRequest for the "cores" resource.
- Parameters:
  amount - number of cores to allocate per Executor.
- Returns:
  this ExecutorResourceRequests, to allow chaining.
-
memory
public ExecutorResourceRequests memory(String amount)
Specify heap memory. The value specified will be converted to MiB. This is a convenience API that adds an ExecutorResourceRequest for the "memory" resource.
- Parameters:
  amount - amount of memory, in the same format as JVM memory strings (e.g. 512m, 2g). The default unit is MiB if none is specified.
- Returns:
  this ExecutorResourceRequests, to allow chaining.
-
memoryOverhead
public ExecutorResourceRequests memoryOverhead(String amount)
Specify overhead memory. The value specified will be converted to MiB. This is a convenience API that adds an ExecutorResourceRequest for the "memoryOverhead" resource.
- Parameters:
  amount - amount of memory, in the same format as JVM memory strings (e.g. 512m, 2g). The default unit is MiB if none is specified.
- Returns:
  this ExecutorResourceRequests, to allow chaining.
-
offHeapMemory
public ExecutorResourceRequests offHeapMemory(String amount)
Specify off-heap memory. The value specified will be converted to MiB. This value only takes effect when spark.memory.offHeap.enabled (MEMORY_OFFHEAP_ENABLED) is true. This is a convenience API that adds an ExecutorResourceRequest for the "offHeap" resource.
- Parameters:
  amount - amount of memory, in the same format as JVM memory strings (e.g. 512m, 2g). The default unit is MiB if none is specified.
- Returns:
  this ExecutorResourceRequests, to allow chaining.
-
pysparkMemory
public ExecutorResourceRequests pysparkMemory(String amount)
Specify pyspark memory. The value specified will be converted to MiB. This is a convenience API that adds an ExecutorResourceRequest for the "pyspark.memory" resource.
- Parameters:
  amount - amount of memory, in the same format as JVM memory strings (e.g. 512m, 2g). The default unit is MiB if none is specified.
- Returns:
  this ExecutorResourceRequests, to allow chaining.
-
requests
public scala.collection.immutable.Map<String,ExecutorResourceRequest> requests()
Returns all the resource requests for the executor.
- Returns:
  an immutable map from resource name to ExecutorResourceRequest.
-
requestsJMap
public java.util.Map<String,ExecutorResourceRequest> requestsJMap()
(Java-specific) Returns all the resource requests for the executor.
- Returns:
  a map from resource name to ExecutorResourceRequest.
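The two accessors expose the same accumulated requests in Scala and Java map types; a small sketch:

```scala
import org.apache.spark.resource.ExecutorResourceRequests

val reqs = new ExecutorResourceRequests().cores(2).memory("4g")

// Scala callers: an immutable Map keyed by resource name ("cores", "memory", ...).
val scalaMap = reqs.requests

// Java callers: the same content as a java.util.Map.
val javaMap = reqs.requestsJMap
```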
-
resource
public ExecutorResourceRequests resource(String resourceName, long amount, String discoveryScript, String vendor)
Amount of a particular custom resource (GPU, FPGA, etc.) to use. The resource names supported correspond to the regular Spark configs with the prefix removed. For instance, a resource like GPUs is gpu (Spark configs spark.executor.resource.gpu.*). If you pass in a resource that the cluster manager doesn't support, the result is undefined: it may error, or it may simply be ignored. This is a convenience API that adds an ExecutorResourceRequest for custom resources.
- Parameters:
  resourceName - name of the resource.
  amount - amount of that resource per executor to use.
  discoveryScript - optional script used to discover the resources. This is required on some cluster managers that don't tell Spark the addresses of the resources allocated. The script runs on Executor startup to discover the addresses of the resources available.
  vendor - optional vendor, required by some cluster managers.
- Returns:
  this ExecutorResourceRequests, to allow chaining.
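For a custom resource such as GPUs, the call might look as follows (the script path and vendor are illustrative; the discovery script must print the available resource addresses in the format Spark expects):

```scala
import org.apache.spark.resource.ExecutorResourceRequests

// Request 2 GPUs per executor; the discovery script is needed on cluster
// managers that don't report resource addresses themselves.
val reqs = new ExecutorResourceRequests()
  .resource("gpu", 2, "/opt/spark/scripts/getGpus.sh", "nvidia.com")
```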
-
toString
public String toString()