abstract class DataStreamWriter[T] extends WriteConfigMethods[DataStreamWriter[T]]
Interface used to write a streaming Dataset to external storage systems (e.g., file systems, key-value stores). Use Dataset.writeStream to access this.
- Annotations
- @Evolving()
- Source
- DataStreamWriter.scala
- Since
- 2.0.0 
Inheritance
- DataStreamWriter
- WriteConfigMethods
- AnyRef
- Any
Instance Constructors
-  new DataStreamWriter()
Abstract Value Members
-   abstract def clusterBy(colNames: String*): DataStreamWriter.this.type
Clusters the output by the given columns. If specified, the output is laid out such that records with similar values on the clustering columns are grouped together in the same file. Clustering improves query efficiency by allowing queries with predicates on the clustering columns to skip unnecessary data. Unlike partitioning, clustering can be used on very high cardinality columns.
- Annotations
- @varargs()
- Since
- 4.0.0 
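For illustration, a minimal sketch of a clustered streaming write; df, the parquet format, the column name, and the paths below are assumptions, not part of this method's contract:
  // Hypothetical usage: df is an existing streaming Dataset.
  df.writeStream
    .format("parquet")                                // assumed file-based sink
    .clusterBy("userId")                              // usable even on high-cardinality columns
    .option("checkpointLocation", "/tmp/chk/cluster") // placeholder path
    .start("/data/out_clustered")                     // placeholder path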
 
-   abstract def foreach(writer: ForeachWriter[T]): DataStreamWriter.this.type
Sets the output of the streaming query to be processed using the provided writer object. See org.apache.spark.sql.ForeachWriter for more details on the lifecycle and semantics.
- Since
- 2.0.0 
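An illustrative ForeachWriter sketch; only the open/process/close lifecycle comes from the API, while the println sink is a placeholder:
  import org.apache.spark.sql.ForeachWriter

  val writer = new ForeachWriter[String] {
    def open(partitionId: Long, epochId: Long): Boolean = {
      // Acquire per-partition resources; returning false skips this partition.
      true
    }
    def process(value: String): Unit = {
      // Send one record to the external system (println is a stand-in).
      println(value)
    }
    def close(errorOrNull: Throwable): Unit = {
      // Release resources; errorOrNull is non-null if processing failed.
    }
  }

  df.as[String].writeStream.foreach(writer).start()  // assumes spark.implicits._ is in scope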
 
-   abstract def foreachBatch(function: (Dataset[T], Long) => Unit): DataStreamWriter.this.type
:: Experimental :: (Scala-specific) Sets the output of the streaming query to be processed using the provided function. This is supported only in the micro-batch execution modes (that is, when the trigger is not continuous). In every micro-batch, the provided function will be called with (i) the output rows as a Dataset and (ii) the batch identifier. The batchId can be used to deduplicate and transactionally write the output (that is, the provided Dataset) to external systems. The output Dataset is guaranteed to be exactly the same for the same batchId (assuming all operations are deterministic in the query).
- Annotations
- @Evolving()
- Since
- 2.4.0 
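A sketch of the Scala variant; the JDBC sink and its settings are assumptions chosen to show how batchId supports idempotent writes:
  import org.apache.spark.sql.{Dataset, Row}

  df.writeStream
    .foreachBatch { (batch: Dataset[Row], batchId: Long) =>
      // The same batchId always carries the same rows, enabling deduplication
      // on replay after a failure.
      batch.write
        .mode("append")
        .format("jdbc")
        .option("url", "jdbc:postgresql://host/db") // placeholder URL
        .option("dbtable", "events")                // placeholder table
        .save()
    }
    .start()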
 
-   abstract def format(source: String): DataStreamWriter.this.type
Specifies the underlying output data source.
- Since
- 2.0.0 
 
-   abstract def option(key: String, value: String): DataStreamWriter[T]
Add a write option.
- Definition Classes
- WriteConfigMethods
- Since
- 3.0.0 
 
-   abstract def options(options: java.util.Map[String, String]): DataStreamWriter[T]
Add write options from a Java Map.
- Definition Classes
- WriteConfigMethods
- Since
- 3.0.0 
 
-   abstract def options(options: scala.collection.Map[String, String]): DataStreamWriter[T]
Add write options from a Scala Map.
- Definition Classes
- WriteConfigMethods
- Since
- 3.0.0 
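For illustration, the single-option and map-based variants compose on one writer; the keys shown (path, checkpointLocation) are common file-sink options, used here as assumptions:
  df.writeStream
    .format("parquet")
    .option("checkpointLocation", "/tmp/chk/opts")  // one option at a time
    .options(Map("path" -> "/data/out"))            // several options via a Scala Map
    .start()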
 
-   abstract def outputMode(outputMode: String): DataStreamWriter.this.type
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
- append: only the new rows in the streaming DataFrame/Dataset will be written to the sink.
- complete: all the rows in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
- update: only the rows that were updated in the streaming DataFrame/Dataset will be written to the sink every time there are some updates. If the query doesn't contain aggregations, it will be equivalent to append mode.
 - Since
- 2.0.0 
 
-   abstract def outputMode(outputMode: OutputMode): DataStreamWriter.this.type
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
- OutputMode.Append(): only the new rows in the streaming DataFrame/Dataset will be written to the sink.
- OutputMode.Complete(): all the rows in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
- OutputMode.Update(): only the rows that were updated in the streaming DataFrame/Dataset will be written to the sink every time there are some updates. If the query doesn't contain aggregations, it will be equivalent to OutputMode.Append() mode.
 - Since
- 2.0.0 
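A sketch contrasting the two overloads on a hypothetical aggregating query (the grouping column is an assumption):
  import org.apache.spark.sql.streaming.OutputMode

  val counts = df.groupBy("userId").count()            // placeholder aggregation

  counts.writeStream.outputMode("complete")            // string form
  counts.writeStream.outputMode(OutputMode.Complete()) // equivalent typed form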
 
-   abstract def partitionBy(colNames: String*): DataStreamWriter.this.type
Partitions the output by the given columns on the file system. If specified, the output is laid out on the file system similar to Hive's partitioning scheme. As an example, when we partition a dataset by year and then month, the directory layout would look like:
- year=2016/month=01/
- year=2016/month=02/
Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
- Annotations
- @varargs()
- Since
- 2.0.0 
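Continuing the year/month example above as a sketch (format and paths are placeholder assumptions):
  df.writeStream
    .format("parquet")
    .partitionBy("year", "month")                   // produces year=.../month=.../ directories
    .option("checkpointLocation", "/tmp/chk/part")
    .start("/data/out_by_month")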
 
-   abstract def queryName(queryName: String): DataStreamWriter.this.type
Specifies the name of the org.apache.spark.sql.streaming.StreamingQuery that can be started with start(). This name must be unique among all the currently active queries in the associated SparkSession.
- Since
- 2.0.0 
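A sketch; the name is arbitrary, and the memory sink is chosen here because it exposes results under the query name:
  val named = df.writeStream
    .queryName("events_ingest")                     // placeholder name; must be unique
    .format("memory")                               // results appear as table "events_ingest"
    .start()

  spark.streams.active.find(_.name == "events_ingest") // look the query up by name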
 
-   abstract def start(): StreamingQuery
Starts the execution of the streaming query, which will continually output results to the given path as new data arrives. The returned org.apache.spark.sql.streaming.StreamingQuery object can be used to interact with the stream. Throws a TimeoutException if the following conditions are met:
- Another run of the same streaming query, that is, a streaming query sharing the same checkpoint location, is already active on the same Spark driver
- The SQL configuration spark.sql.streaming.stopActiveRunOnRestart is enabled
- The active run cannot be stopped within the timeout controlled by the SQL configuration spark.sql.streaming.stopTimeout
- Annotations
- @throws[TimeoutException]
- Since
- 2.0.0 
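A minimal start-and-wait sketch; the file-sink options are assumptions:
  val query = df.writeStream
    .format("parquet")
    .option("path", "/data/out")                    // placeholder output path
    .option("checkpointLocation", "/tmp/chk/start") // placeholder checkpoint
    .start()

  query.awaitTermination()                          // block until the query stops or fails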
 
-   abstract def start(path: String): StreamingQuery
Starts the execution of the streaming query, which will continually output results to the given path as new data arrives. The returned org.apache.spark.sql.streaming.StreamingQuery object can be used to interact with the stream.
- Since
- 2.0.0 
 
-   abstract def toTable(tableName: String): StreamingQuery
Starts the execution of the streaming query, which will continually output results to the given table as new data arrives. The returned org.apache.spark.sql.streaming.StreamingQuery object can be used to interact with the stream. For a v1 table, the partitioning columns provided by partitionBy will be respected whether or not the table exists; a new table will be created if the table does not exist. For a v2 table, partitionBy will be ignored if the table already exists; it will be respected only if the v2 table does not exist. Besides, the v2 table created by this API lacks some functionalities (e.g., customized properties, options, and serde info). If you need them, please create the v2 table manually before the execution to avoid creating a table with incomplete information.
- Annotations
- @Evolving() @throws[TimeoutException]
- Since
- 3.1.0 
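A sketch of the table-based start; the table name, partition column, and checkpoint path are placeholders:
  val tableQuery = df.writeStream
    .partitionBy("date")                            // applied only when the table is created
    .option("checkpointLocation", "/tmp/chk/table")
    .toTable("events_table")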
 
-   abstract def trigger(trigger: Trigger): DataStreamWriter.this.type
Set the trigger for the stream query. The default value is ProcessingTime(0), and it will run the query as fast as possible.
Scala Example:
  df.writeStream.trigger(ProcessingTime("10 seconds"))

  import scala.concurrent.duration._
  df.writeStream.trigger(ProcessingTime(10.seconds))
Java Example:
  df.writeStream().trigger(ProcessingTime.create("10 seconds"))

  import java.util.concurrent.TimeUnit
  df.writeStream().trigger(ProcessingTime.create(10, TimeUnit.SECONDS))
- Since
- 2.0.0 
 
Concrete Value Members
-   final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any

-   final def ##: Int
- Definition Classes
- AnyRef → Any

-   final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any

-   final def asInstanceOf[T0]: T0
- Definition Classes
- Any

-   def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()

-   final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef

-   def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
 
-   def foreachBatch(function: VoidFunction2[Dataset[T], Long]): DataStreamWriter.this.type
:: Experimental :: (Java-specific) Sets the output of the streaming query to be processed using the provided function. This is supported only in the micro-batch execution modes (that is, when the trigger is not continuous). In every micro-batch, the provided function will be called with (i) the output rows as a Dataset and (ii) the batch identifier. The batchId can be used to deduplicate and transactionally write the output (that is, the provided Dataset) to external systems. The output Dataset is guaranteed to be exactly the same for the same batchId (assuming all operations are deterministic in the query).
- Annotations
- @Evolving()
- Since
- 2.4.0 
 
-   final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()

-   def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()

-   final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any

-   final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef

-   final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()

-   final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-   def option(key: String, value: Double): DataStreamWriter.this.type
Add a double output option.
- Definition Classes
- DataStreamWriter → WriteConfigMethods
- Since
- 3.0.0

-   def option(key: String, value: Long): DataStreamWriter.this.type
Add a long output option.
- Definition Classes
- DataStreamWriter → WriteConfigMethods
- Since
- 3.0.0

-   def option(key: String, value: Boolean): DataStreamWriter.this.type
Add a boolean output option.
- Definition Classes
- DataStreamWriter → WriteConfigMethods
- Since
- 3.0.0
 
-   final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef

-   def toString(): String
- Definition Classes
- AnyRef → Any

-   final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])

-   final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()

-   final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
Deprecated Value Members
-   def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
- (Since version 9)