@DeveloperApi
public interface DriverPlugin
Driver component of a SparkPlugin.
| Modifier and Type | Method and Description |
|---|---|
| default java.util.Map<String,String> | init(SparkContext sc, PluginContext pluginContext): Initialize the plugin. |
| default Object | receive(Object message): RPC message handler. |
| default void | registerMetrics(String appId, PluginContext pluginContext): Register metrics published by the plugin with Spark's metrics system. |
| default void | shutdown(): Informs the plugin that the Spark application is shutting down. |
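For orientation, here is a minimal sketch of how a DriverPlugin is typically wired up: a SparkPlugin implementation returns it from driverPlugin(), and the plugin is enabled through the spark.plugins configuration. The class names are hypothetical, and the executor-side component is omitted in this sketch.

```java
import org.apache.spark.api.plugin.DriverPlugin;
import org.apache.spark.api.plugin.ExecutorPlugin;
import org.apache.spark.api.plugin.SparkPlugin;

// Hypothetical top-level plugin that supplies the driver-side component.
public class MySparkPlugin implements SparkPlugin {
  @Override
  public DriverPlugin driverPlugin() {
    return new MyDriverPlugin(); // hypothetical class, sketched below
  }

  @Override
  public ExecutorPlugin executorPlugin() {
    return null; // no executor-side component in this sketch
  }
}
```

The plugin would then be enabled with, e.g., spark.plugins=com.example.MySparkPlugin.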
default java.util.Map<String,String> init(SparkContext sc, PluginContext pluginContext)
This method is called early in the initialization of the Spark driver. Specifically, it is called before the Spark driver's task scheduler is initialized. This means that many other Spark subsystems may not yet have been initialized. This call also blocks driver initialization.
It's recommended that plugins be careful about what operations are performed in this call, preferably performing expensive operations in a separate thread, or postponing them until the application has fully started.
Parameters:
- sc - The SparkContext loading the plugin.
- pluginContext - Additional plugin-specific information about the Spark application where the plugin is running.

Returns:
A map that will be provided to the ExecutorPlugin.init(PluginContext,Map) method.
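As an illustration of keeping this call light, here is a minimal sketch (class, thread, and configuration key names are hypothetical) that defers expensive setup to a daemon thread and returns extra configuration for the executor-side component:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkContext;
import org.apache.spark.api.plugin.DriverPlugin;
import org.apache.spark.api.plugin.PluginContext;

// Hypothetical driver component; defers expensive setup so this call
// does not block driver initialization longer than necessary.
public class MyDriverPlugin implements DriverPlugin {
  @Override
  public Map<String, String> init(SparkContext sc, PluginContext pluginContext) {
    // Slow work (e.g. opening external connections) goes on a daemon thread.
    Thread warmup = new Thread(this::connectToExternalService, "my-plugin-warmup");
    warmup.setDaemon(true);
    warmup.start();

    // The returned map is handed to ExecutorPlugin.init(PluginContext, Map).
    Map<String, String> extraConf = new HashMap<>();
    extraConf.put("reportIntervalMs", "5000"); // hypothetical key
    return extraConf;
  }

  private void connectToExternalService() { /* hypothetical expensive setup */ }
}
```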
default void registerMetrics(String appId, PluginContext pluginContext)
This method is called later in the initialization of the Spark application, after most subsystems are up and the application ID is known. If there are metrics registered in the registry (PluginContext.metricRegistry()), then a metrics source with the plugin name will be created.
Note that even though the metric registry is still accessible after this method is called, metrics registered at that later point may not be available.
Parameters:
- appId - The application ID from the cluster manager.
- pluginContext - Additional plugin-specific information about the Spark application where the plugin is running.
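For example, a plugin might register a gauge here, backed by plugin-owned state (the metric name and field below are hypothetical, continuing the MyDriverPlugin sketch above). The registry returned by PluginContext.metricRegistry() is a Dropwizard MetricRegistry:

```java
import java.util.concurrent.atomic.AtomicLong;

import com.codahale.metrics.Gauge;
import org.apache.spark.api.plugin.PluginContext;

// Inside the hypothetical MyDriverPlugin from the earlier sketch:
private final AtomicLong messagesSeen = new AtomicLong();

@Override
public void registerMetrics(String appId, PluginContext pluginContext) {
  // Metrics registered here appear under a source named after the plugin.
  pluginContext.metricRegistry().register(
      "messagesSeen", (Gauge<Long>) messagesSeen::get);
}
```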
default Object receive(Object message) throws Exception
Plugins can use Spark's RPC system to send messages from executors to the driver (but not the other way around, currently). Messages sent by the executor component of the plugin will be delivered to this method, and the returned value will be sent back to the executor as the reply, if the executor has requested one.
Any exception thrown will be sent back to the executor as an error if it is expecting a reply. If a reply is not expected, a log message will be written to the driver log.
The implementation of this handler should be thread-safe.
Note that all plugins share RPC dispatch threads, and this method is called synchronously, so performing expensive operations in this handler may affect the operation of other active plugins. Internal Spark endpoints are not directly affected, though, since they use different threads.
Spark guarantees that the driver component will be ready to receive messages through this handler when executors are started.
Parameters:
- message - The incoming message.

Throws:
- Exception
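Here is a minimal sketch of a thread-safe handler, continuing the hypothetical plugin above; the "ping"/"pong" message protocol is made up for illustration:

```java
// Driver side: receives messages the executor component sends via
// PluginContext.send() (fire-and-forget) or PluginContext.ask() (expects a reply).
@Override
public Object receive(Object message) throws Exception {
  messagesSeen.incrementAndGet(); // AtomicLong keeps this handler thread-safe
  if ("ping".equals(message)) {
    return "pong"; // sent back only if the executor used ask()
  }
  throw new IllegalArgumentException("Unexpected message: " + message);
}
```

On the executor side, the plugin's component would call pluginContext.ask("ping") for a round trip, or pluginContext.send(...) when no reply is needed.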
default void shutdown()
This method is called during the driver shutdown phase. It is recommended that plugins not use any Spark functions (e.g., sending RPC messages) during this call.
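A minimal sketch of cleanup that follows this advice, again continuing the hypothetical MyDriverPlugin with a made-up resource field:

```java
import java.io.Closeable;
import java.io.IOException;

// Inside the hypothetical MyDriverPlugin: release plugin-owned resources only.
@Override
public void shutdown() {
  // Do not call back into Spark (e.g. RPC) here; the driver is shutting down.
  Closeable client = this.externalClient; // hypothetical field set during init
  if (client != null) {
    try {
      client.close();
    } catch (IOException e) {
      // Best-effort cleanup; swallow and continue shutting down.
    }
  }
}
```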