trait WorkerCapabilitiesOrBuilder extends MessageOrBuilder
Linear supertypes:
- MessageOrBuilder
- MessageLiteOrBuilder
- AnyRef
- Any
Abstract Value Members
- abstract def findInitializationErrors(): List[String]
- Definition Classes
- MessageOrBuilder
- abstract def getAllFields(): Map[FieldDescriptor, AnyRef]
- Definition Classes
- MessageOrBuilder
- abstract def getDefaultInstanceForType(): Message
- Definition Classes
- MessageOrBuilder → MessageLiteOrBuilder
- abstract def getDescriptorForType(): Descriptor
- Definition Classes
- MessageOrBuilder
- abstract def getField(field: FieldDescriptor): AnyRef
- Definition Classes
- MessageOrBuilder
- abstract def getInitializationErrorString(): String
- Definition Classes
- MessageOrBuilder
- abstract def getOneofFieldDescriptor(oneof: OneofDescriptor): FieldDescriptor
- Definition Classes
- MessageOrBuilder
- abstract def getRepeatedField(field: FieldDescriptor, index: Int): AnyRef
- Definition Classes
- MessageOrBuilder
- abstract def getRepeatedFieldCount(field: FieldDescriptor): Int
- Definition Classes
- MessageOrBuilder
- abstract def getSupportedCommunicationPatterns(index: Int): UDFProtoCommunicationPattern
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- index
The index of the element to return.
- returns
The supportedCommunicationPatterns at the given index.
- abstract def getSupportedCommunicationPatternsCount(): Int
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- returns
The count of supportedCommunicationPatterns.
- abstract def getSupportedCommunicationPatternsList(): List[UDFProtoCommunicationPattern]
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- returns
A list containing the supportedCommunicationPatterns.
- abstract def getSupportedCommunicationPatternsValue(index: Int): Int
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- index
The index of the value to return.
- returns
The enum numeric value on the wire of supportedCommunicationPatterns at the given index.
- abstract def getSupportedCommunicationPatternsValueList(): List[Integer]
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- returns
A list containing the enum numeric values on the wire for supportedCommunicationPatterns.
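The five members above are the standard protobuf accessor family for a repeated enum field: an indexed getter, a count, a list view, and wire-value variants of the indexed getter and the list. As a self-contained sketch of that contract, using a placeholder enum rather than the real UDFProtoCommunicationPattern (whose values are not listed on this page) — note that generated protobuf enums expose the wire value via getNumber(), for which ordinal() merely stands in here:

```java
import java.util.List;

public class RepeatedEnumAccessorSketch {
    // Placeholder enum: the actual UDFProtoCommunicationPattern values are not shown on this page.
    enum Pattern { PATTERN_A, PATTERN_B }

    // Stands in for the repeated supported_communication_patterns field.
    static final List<Pattern> SUPPORTED = List.of(Pattern.PATTERN_A, Pattern.PATTERN_B);

    // Mirrors getSupportedCommunicationPatternsCount().
    static int count() { return SUPPORTED.size(); }

    // Mirrors getSupportedCommunicationPatterns(index): the enum at the given index.
    static Pattern get(int index) { return SUPPORTED.get(index); }

    // Mirrors getSupportedCommunicationPatternsValue(index): the numeric value on the wire.
    // Generated protobuf enums expose this via getNumber(); ordinal() stands in for it here.
    static int getValue(int index) { return SUPPORTED.get(index).ordinal(); }

    public static void main(String[] args) {
        System.out.println(count() + " " + get(0) + " " + getValue(1));
    }
}
```

The same shape recurs below for supported_data_formats; only the element type and field name change.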
- abstract def getSupportedDataFormats(index: Int): UDFWorkerDataFormat
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- index
The index of the element to return.
- returns
The supportedDataFormats at the given index.
- abstract def getSupportedDataFormatsCount(): Int
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- returns
The count of supportedDataFormats.
- abstract def getSupportedDataFormatsList(): List[UDFWorkerDataFormat]
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- returns
A list containing the supportedDataFormats.
- abstract def getSupportedDataFormatsValue(index: Int): Int
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- index
The index of the value to return.
- returns
The enum numeric value on the wire of supportedDataFormats at the given index.
- abstract def getSupportedDataFormatsValueList(): List[Integer]
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- returns
A list containing the enum numeric values on the wire for supportedDataFormats.
- abstract def getSupportsConcurrentUdfs(): Boolean
Whether multiple, concurrent UDF connections are supported by this worker (for example via multi-threading). In the first implementation of the engine-side worker specification, this property will not be used. Usage of this property can be enabled in the future if the engine implements more advanced resource management (TBD). (Optional)
optional bool supports_concurrent_udfs = 3;
- returns
The supportsConcurrentUdfs.
- abstract def getSupportsReuse(): Boolean
Whether compatible workers may be reused. If this is not supported, the worker is terminated after every single UDF invocation. (Optional)
optional bool supports_reuse = 4;
- returns
The supportsReuse.
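As with any protobuf optional field, getSupportsReuse() returns a default value even when the field was never set, so engine-side code should consult hasSupportsReuse() first. A minimal stand-alone sketch of that decision; the method name keepWorkerAlive and the default-to-no-reuse policy for an unset field are illustrative assumptions, not part of this API:

```java
public class ReusePolicySketch {
    // Mirrors the has/get pair for `optional bool supports_reuse = 4`:
    // an unset field means the capability was not advertised, so do not reuse the worker.
    static boolean keepWorkerAlive(boolean hasSupportsReuse, boolean supportsReuse) {
        return hasSupportsReuse && supportsReuse;
    }

    public static void main(String[] args) {
        // A worker that advertises reuse support may be kept alive...
        System.out.println(keepWorkerAlive(true, true));
        // ...while one that never set the field is terminated after each invocation.
        System.out.println(keepWorkerAlive(false, false));
    }
}
```

The same has/get discipline applies to supports_concurrent_udfs below.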
- abstract def getUnknownFields(): UnknownFieldSet
- Definition Classes
- MessageOrBuilder
- abstract def hasField(field: FieldDescriptor): Boolean
- Definition Classes
- MessageOrBuilder
- abstract def hasOneof(oneof: OneofDescriptor): Boolean
- Definition Classes
- MessageOrBuilder
- abstract def hasSupportsConcurrentUdfs(): Boolean
Whether multiple, concurrent UDF connections are supported by this worker (for example via multi-threading). In the first implementation of the engine-side worker specification, this property will not be used. Usage of this property can be enabled in the future if the engine implements more advanced resource management (TBD). (Optional)
optional bool supports_concurrent_udfs = 3;
- returns
Whether the supportsConcurrentUdfs field is set.
- abstract def hasSupportsReuse(): Boolean
Whether compatible workers may be reused. If this is not supported, the worker is terminated after every single UDF invocation. (Optional)
optional bool supports_reuse = 4;
- returns
Whether the supportsReuse field is set.
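Taken together, the field declarations quoted on this page imply the following message shape. This is a sketch reconstructed from those declarations, not the authoritative .proto source; the message name WorkerCapabilities is inferred from the trait name, and the enum definitions live elsewhere in the org.apache.spark.udf.worker package.

```protobuf
message WorkerCapabilities {
  // (Required) Data formats the worker supports for UDF input and output; ARROW must be included.
  repeated UDFWorkerDataFormat supported_data_formats = 1;
  // (Required) UDF protocol communication patterns the worker supports.
  repeated UDFProtoCommunicationPattern supported_communication_patterns = 2;
  // (Optional) Whether concurrent UDF connections are supported.
  optional bool supports_concurrent_udfs = 3;
  // (Optional) Whether compatible workers may be reused across invocations.
  optional bool supports_reuse = 4;
}
```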
- abstract def isInitialized(): Boolean
- Definition Classes
- MessageLiteOrBuilder
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)