abstract class DataStreamReader extends AnyRef
Interface used to load a streaming Dataset from external storage systems (e.g. file systems, key-value stores). Use SparkSession.readStream to access this.
- Annotations
- @Evolving()
- Source
- DataStreamReader.scala
- Since
- 2.0.0 
- Inheritance
- DataStreamReader → AnyRef → Any
Instance Constructors
-  new DataStreamReader()
Abstract Value Members
- abstract def assertNoSpecifiedSchema(operation: String): Unit
- Attributes
- protected
 
- abstract def format(source: String): DataStreamReader.this.type
Specifies the input data source format.
- Since
- 2.0.0 
 
- abstract def load(path: String): DataFrame
Loads input in as a DataFrame, for data streams that read from some path.
- Since
- 2.0.0 
 
- abstract def load(): DataFrame
Loads input data stream in as a DataFrame, for data streams that don't require a path (e.g. external key-value stores).
- Since
- 2.0.0 
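
For example, a minimal sketch of the combined format/option/load pattern, using the built-in "rate" test source so no path or external system is needed (the SparkSession value spark is assumed to exist):

// Scala: the "rate" source generates rows continuously, useful for testing.
val rateDF = spark.readStream
  .format("rate")                // input source format
  .option("rowsPerSecond", "5")  // rate-source option
  .load()                        // this source requires no path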
 
- abstract def option(key: String, value: String): DataStreamReader.this.type
Adds an input option for the underlying data source.
- Since
- 2.0.0 
 
- abstract def options(options: Map[String, String]): DataStreamReader.this.type
(Scala-specific) Adds input options for the underlying data source.
- Since
- 2.0.0 
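
For example, a sketch passing several options in one call (the "rate" source and its options are built into Spark; the values are illustrative):

// Scala: options() merges every key-value pair into the reader's options.
val df = spark.readStream
  .format("rate")
  .options(Map(
    "rowsPerSecond" -> "10",
    "numPartitions" -> "2"
  ))
  .load()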
 
- abstract def schema(schema: StructType): DataStreamReader.this.type
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Since
- 2.0.0 
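
For example, a sketch that declares the schema up front so the source can skip inference (the directory path and column names are placeholders):

// Scala: an explicit schema avoids a scan of the input data.
import org.apache.spark.sql.types._

val userSchema = new StructType()
  .add("name", StringType)
  .add("age", IntegerType)

val df = spark.readStream
  .schema(userSchema)
  .json("/path/to/json/dir")  // placeholder path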
 
- abstract def table(tableName: String): DataFrame
Define a Streaming DataFrame on a Table. The DataSource corresponding to the table should support streaming mode.
- tableName
- The name of the table 
 - Since
- 3.1.0 
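
For example, a hedged sketch (the table name is hypothetical; the table's underlying data source, e.g. Delta Lake, must support streaming reads):

// Scala: read a catalog table as a streaming DataFrame.
val events = spark.readStream.table("events")  // hypothetical table name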
 
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any

- final def ##: Int
- Definition Classes
- AnyRef → Any

- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any

- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
 
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
 
- def csv(path: String): DataFrame
Loads a CSV file stream and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the CSV-specific options for reading CSV file streams in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option) in the version you use.
- Since
- 2.0.0 
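
For example, a minimal sketch with an explicit schema (the path, columns, and option values are placeholders):

// Scala: declaring the schema avoids the inference pass described above.
import org.apache.spark.sql.types._

val csvSchema = new StructType()
  .add("id", LongType)
  .add("name", StringType)

val csvDF = spark.readStream
  .schema(csvSchema)
  .option("header", "true")            // CSV data-source option
  .option("maxFilesPerTrigger", "10")  // throttle files per micro-batch
  .csv("/path/to/csv/dir")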
 
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef

- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any

- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()

- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()

- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
 
- def json(path: String): DataFrame
Loads a JSON file stream and returns the results as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the JSON-specific options for reading JSON file streams in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option) in the version you use.
- Since
- 2.0.0 
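
For example, a sketch that supplies the schema so the extra scan is skipped (the path and fields are placeholders):

// Scala: JSON Lines input with a user-specified schema.
import org.apache.spark.sql.types._

val jsonSchema = new StructType()
  .add("user", StringType)
  .add("ts", TimestampType)

val jsonDF = spark.readStream
  .schema(jsonSchema)
  .json("/path/to/json/dir")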
 
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef

- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()

- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
- def option(key: String, value: Double): DataStreamReader.this.type
Adds an input option for the underlying data source.
- Since
- 2.0.0

- def option(key: String, value: Long): DataStreamReader.this.type
Adds an input option for the underlying data source.
- Since
- 2.0.0

- def option(key: String, value: Boolean): DataStreamReader.this.type
Adds an input option for the underlying data source.
- Since
- 2.0.0 
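
These overloads simply spare you the string conversion; for example, a sketch mixing the three (samplingRatio is the CSV/JSON schema-inference sampling option, used here only as an illustration):

// Scala: typed values are converted to their string forms internally.
val reader = spark.readStream
  .format("csv")
  .option("header", true)             // Boolean overload
  .option("maxFilesPerTrigger", 10L)  // Long overload
  .option("samplingRatio", 0.1)       // Double overload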
 
- def options(options: java.util.Map[String, String]): DataStreamReader.this.type
(Java-specific) Adds input options for the underlying data source.
- Since
- 2.0.0 
 
- def orc(path: String): DataFrame
Loads an ORC file stream, returning the result as a DataFrame.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the ORC-specific options for reading ORC file streams in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option) in the version you use.
- Since
- 2.3.0 
 
- def parquet(path: String): DataFrame
Loads a Parquet file stream, returning the result as a DataFrame.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the Parquet-specific options for reading Parquet file streams in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#data-source-option) in the version you use.
- Since
- 2.0.0 
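
For example, a minimal sketch (the path is a placeholder; an ORC stream follows the same shape with orc). Note that streaming file sources generally require a user-declared schema unless spark.sql.streaming.schemaInference is enabled:

// Scala: Parquet stream with an explicit schema.
import org.apache.spark.sql.types._

val parquetDF = spark.readStream
  .schema(new StructType().add("event", StringType).add("count", LongType))
  .option("maxFilesPerTrigger", 100L)
  .parquet("/path/to/parquet/dir")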
 
- def schema(schemaString: String): DataStreamReader.this.type
Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Since
- 2.3.0 
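
For example, a sketch using a DDL string in place of a StructType (the fields and path are illustrative):

// Scala: the DDL form is equivalent to building the StructType by hand.
val df = spark.readStream
  .schema("name STRING, age INT")  // DDL-formatted schema
  .csv("/path/to/csv/dir")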
 
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
 
- def text(path: String): DataFrame
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.
By default, each line in the text files is a new row in the resulting DataFrame. For example:
// Scala:
spark.readStream.text("/path/to/directory/")
// Java:
spark.readStream().text("/path/to/directory/")
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the text-specific options for reading text files in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-text.html#data-source-option) in the version you use.
- Since
- 2.0.0 
 
- def textFile(path: String): Dataset[String]
Loads text file(s) and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.
If the directory structure of the text files contains partitioning information, it is ignored in the resulting Dataset. To include partitioning information as columns, use text.
By default, each line in the text file is a new element in the resulting Dataset. For example:
// Scala:
spark.readStream.textFile("/path/to/spark/README.md")
// Java:
spark.readStream().textFile("/path/to/spark/README.md")
You can set the text-specific options as specified in DataStreamReader.text.
- path
- input path 
 - Since
- 2.1.0 
 
- def toString(): String
- Definition Classes
- AnyRef → Any
 
- def validateJsonSchema(): Unit
- Attributes
- protected

- def validateXmlSchema(): Unit
- Attributes
- protected
 
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])

- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()

- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
- def xml(path: String): DataFrame
Loads an XML file stream and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the XML-specific options for reading XML file streams in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-xml.html#data-source-option) in the version you use.
- Since
- 4.0.0 
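
For example, a hedged sketch (rowTag names the XML element treated as one row; the path, tag, and fields are placeholders):

// Scala: XML stream with an explicit schema and row tag.
import org.apache.spark.sql.types._

val xmlDF = spark.readStream
  .schema(new StructType().add("title", StringType).add("author", StringType))
  .option("rowTag", "book")  // each <book> element becomes a row
  .xml("/path/to/xml/dir")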
 
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
- (Since version 9)