Interface FlowExecution
- All Known Subinterfaces:
- StreamingFlowExecution
- All Known Implementing Classes:
- BatchTableWrite, SinkWrite, StreamingTableWrite
public interface FlowExecution
A `FlowExecution` specifies how to execute a flow and manages its execution.
Method Summary

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `Output` | `destination()` | The destination that this `FlowExecution` is writing to. |
| `String` | `displayName()` | Returns a user-visible name for the flow. |
| `scala.Option<Throwable>` | `exception()` | Returns an optional exception that occurred during execution, if any. |
| `void` | `executeAsync()` | Executes this `FlowExecution` asynchronously to perform its intended update. |
| `scala.concurrent.Future<scala.runtime.BoxedUnit>` | `executeInternal()` | Executes this `FlowExecution` synchronously to perform its intended update. |
| `scala.concurrent.ExecutionContext` | `executionContext()` | The thread execution context for the current `FlowExecution`. |
| `scala.concurrent.Future<ExecutionResult>` | `getFuture()` | Retrieves the future that can be used to track execution status. |
| `QueryOrigin` | `getOrigin()` | Origin to use when recording events for this flow. |
| `org.apache.spark.sql.catalyst.TableIdentifier` | `identifier()` | Identifier of this physical flow. |
| `boolean` | `isCompleted()` | Returns true if and only if this `FlowExecution` has been completed with either success or an exception. |
| `boolean` | `isStreaming()` | Returns true iff this `FlowExecution` executes using Spark Structured Streaming. |
| `org.apache.spark.sql.classic.SparkSession` | `spark()` | SparkSession to execute this physical flow with. |
| `void` | `stop()` | Stops execution of this `FlowExecution`. |
| `PipelineUpdateContext` | `updateContext()` | Context about this pipeline update. |
Method Details

- identifier
  `org.apache.spark.sql.catalyst.TableIdentifier identifier()`
  Identifier of this physical flow.
- displayName
  `String displayName()`
  Returns a user-visible name for the flow.
 
- spark
  `org.apache.spark.sql.classic.SparkSession spark()`
  SparkSession to execute this physical flow with. For streaming flows the default is the pipeline's Spark session, because the source DataFrame is resolved using the pipeline's session and the streaming query implicitly starts a new session. For batch flows the default is a session cloned from the pipeline's Spark session. Make sure the execution thread runs in a different Spark session than the pipeline's.
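  To make the session requirement concrete, here is a minimal sketch of how an implementation might keep its execution on a session distinct from the pipeline's. `IsolatedSessionBatchFlow` and `pipelineSession` are hypothetical, the general `SparkSession` type is used instead of the classic one for simplicity, and `newSession()` stands in for the documented default of cloning the pipeline's session.

  ```scala
  import org.apache.spark.sql.SparkSession

  // Sketch only, not the real implementation. newSession() gives an isolated
  // session sharing the same SparkContext; the documented default for batch
  // flows is a *cloned* session, which also carries over session state.
  class IsolatedSessionBatchFlow(pipelineSession: SparkSession) {

    // One isolated session per flow execution, created lazily.
    private lazy val flowSession: SparkSession = pipelineSession.newSession()

    // What a FlowExecution implementation would return from spark().
    def spark(): SparkSession = flowSession
  }
  ```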
 
- getOrigin
  `QueryOrigin getOrigin()`
  Origin to use when recording events for this flow.
 
- isCompleted
  `boolean isCompleted()`
  Returns true if and only if this `FlowExecution` has been completed with either success or an exception.
 
- isStreaming
  `boolean isStreaming()`
  Returns true iff this `FlowExecution` executes using Spark Structured Streaming.
- getFuture
  `scala.concurrent.Future<ExecutionResult> getFuture()`
  Retrieves the future that can be used to track execution status.
- updateContext
  `PipelineUpdateContext updateContext()`
  Context about this pipeline update.
- executionContext
  `scala.concurrent.ExecutionContext executionContext()`
  The thread execution context for the current `FlowExecution`.
- stop
  `void stop()`
  Stops execution of this `FlowExecution`. If you override this method, be sure to call `super.stop()` at the beginning of your method so that errors can be handled properly when a user tries to stop a flow.
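  As a rough illustration of that override contract, the sketch below uses a hypothetical stand-in base class; in real code the base would be a `FlowExecution` implementation such as `StreamingTableWrite`, and `releaseResources` is likewise invented for the example.

  ```scala
  // Hypothetical stand-in for a FlowExecution base class whose stop() must be chained.
  abstract class StandInFlowBase {
    def stop(): Unit = {
      // base-class handling of user-initiated stops would live here
    }
  }

  class MyFlowExecution extends StandInFlowBase {

    private def releaseResources(): Unit = ()

    override def stop(): Unit = {
      super.stop()        // per the contract above: call super.stop() first
      releaseResources()  // then tear down anything this flow owns
    }
  }
  ```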
- exception
  `scala.Option<Throwable> exception()`
  Returns an optional exception that occurred during execution, if any.
- executeInternal
  `scala.concurrent.Future<scala.runtime.BoxedUnit> executeInternal()`
  Executes this `FlowExecution` synchronously to perform its intended update. This method should be overridden by subclasses to provide the actual execution logic.
  Returns: a `Future` that completes when the execution is finished or stopped.
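  The sketch below shows one plausible shape for such an override. `FlowExecutionStub`, `MyTableWrite`, and `runWrite` are hypothetical stand-ins rather than the real interface; it simply assumes the returned future tracks work scheduled on the flow's execution context.

  ```scala
  import scala.concurrent.{ExecutionContext, Future}

  // Hypothetical stand-in mirroring only the two members used in this sketch.
  // (In Scala the documented return type Future<BoxedUnit> is simply Future[Unit].)
  abstract class FlowExecutionStub {
    def executionContext: ExecutionContext
    def executeInternal(): Future[Unit]
  }

  class MyTableWrite extends FlowExecutionStub {

    override val executionContext: ExecutionContext = ExecutionContext.global

    // The actual write logic would go here.
    private def runWrite(): Unit = ()

    // The returned Future completes when the write finishes, or fails with the
    // underlying exception if the write throws.
    override def executeInternal(): Future[Unit] =
      Future(runWrite())(executionContext)
  }
  ```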
 
- executeAsync
  `void executeAsync()`
  Executes this `FlowExecution` asynchronously to perform its intended update. A future that can be used to track execution status is saved and can be retrieved with `getFuture`.
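  A caller-side usage sketch, written against a hypothetical stand-in trait that mirrors only the members documented on this page (the real `getFuture` returns a `Future[ExecutionResult]`):

  ```scala
  import scala.concurrent.duration.Duration
  import scala.concurrent.{Await, Future}

  // Hypothetical stand-in for the FlowExecution members used below.
  trait DocumentedFlow {
    def executeAsync(): Unit
    def getFuture: Future[_]
    def isCompleted: Boolean
    def exception: Option[Throwable]
  }

  object FlowRunner {
    def runAndReport(flow: DocumentedFlow): Unit = {
      flow.executeAsync()                        // start the update; the future is saved internally
      Await.ready(flow.getFuture, Duration.Inf)  // wait until execution finishes or is stopped
      assert(flow.isCompleted)
      flow.exception match {
        case Some(t) => println(s"flow failed: ${t.getMessage}")
        case None    => println("flow completed successfully")
      }
    }
  }
  ```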
- destination
  `Output destination()`
  The destination that this `FlowExecution` is writing to.
 