final class DataStreamReader extends Logging
Interface used to load a streaming Dataset from external storage systems (e.g. file systems, key-value stores, etc.). Use SparkSession.readStream to access this.
- Annotations
- @Evolving()
- Source
- DataStreamReader.scala
- Since
2.0.0
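For example, a minimal access sketch (the app name and master are illustrative, not required by this API):
// Scala: obtain a DataStreamReader from a SparkSession
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("example")     // illustrative app name
  .master("local[*]")     // illustrative local master
  .getOrCreate()
val reader = spark.readStream   // returns a DataStreamReader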
- Inheritance
- DataStreamReader
- Logging
- AnyRef
- Any
Type Members
- implicit class LogStringContext extends AnyRef
- Definition Classes
- Logging
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- def csv(path: String): DataFrame
Loads a CSV file stream and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the CSV-specific options for reading CSV file streams in Data Source Option in the version you use.
- Since
2.0.0
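For example, a minimal sketch (assuming an active SparkSession named spark; the path, column names, and option value are illustrative):
// Scala: read a CSV file stream with an explicit schema
import org.apache.spark.sql.types._

val csvSchema = new StructType().add("name", StringType).add("age", IntegerType)
val csvDF = spark.readStream
  .schema(csvSchema)                    // avoids the schema-inference scan
  .option("maxFilesPerTrigger", "10")   // cap new files per trigger
  .csv("/path/to/csv/dir")              // illustrative path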
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def format(source: String): DataStreamReader
Specifies the input data source format.
- Since
2.0.0
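For example, a sketch selecting the Kafka source by name (this assumes the Kafka connector is on the classpath; the broker address and topic are placeholders):
// Scala: select the Kafka source by name, then configure and load it
val kafkaDF = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:9092")  // placeholder broker
  .option("subscribe", "topic1")                    // placeholder topic
  .load()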
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def json(path: String): DataFrame
Loads a JSON file stream and returns the result as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the JSON-specific options for reading JSON file streams in Data Source Option in the version you use.
- Since
2.0.0
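For example, a minimal sketch (assuming an active SparkSession named spark; the path and schema are illustrative):
// Scala: read a newline-delimited JSON file stream with an explicit schema
import org.apache.spark.sql.types._

val jsonSchema = new StructType().add("id", LongType).add("event", StringType)
val jsonDF = spark.readStream
  .schema(jsonSchema)          // skips the extra schema-inference pass
  .json("/path/to/json/dir")   // illustrative path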
- def load(path: String): DataFrame
Loads the input as a DataFrame, for data streams that read from some path.
- Since
2.0.0
- def load(): DataFrame
Loads the input data stream as a DataFrame, for data streams that don't require a path (e.g. external key-value stores).
- Since
2.0.0
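For example, sketches of both overloads (assuming an active SparkSession named spark; the path is illustrative, and the built-in rate source serves as a path-less example):
// Scala: load(path) for a path-based source
import org.apache.spark.sql.types._

val s = new StructType().add("value", StringType)
val fileDF = spark.readStream
  .format("json")
  .schema(s)
  .load("/path/to/json/dir")   // illustrative path

// Scala: load() for a source that takes no path (built-in "rate" test source)
val rateDF = spark.readStream
  .format("rate")
  .option("rowsPerSecond", "5")
  .load()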
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- def option(key: String, value: Double): DataStreamReader
Adds an input option for the underlying data source.
- Since
2.0.0
- def option(key: String, value: Long): DataStreamReader
Adds an input option for the underlying data source.
- Since
2.0.0
- def option(key: String, value: Boolean): DataStreamReader
Adds an input option for the underlying data source.
- Since
2.0.0
- def option(key: String, value: String): DataStreamReader
Adds an input option for the underlying data source.
- Since
2.0.0
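For example, a sketch exercising all four overloads above (assuming an active SparkSession named spark; maxFilesPerTrigger is described earlier on this page, and the other keys are standard CSV/JSON option names):
// Scala: one call per overload; the non-String overloads store the value's string form
val configured = spark.readStream
  .option("maxFilesPerTrigger", 5L)   // Long
  .option("inferSchema", true)        // Boolean
  .option("samplingRatio", 0.5)       // Double (JSON/CSV inference sampling)
  .option("sep", ",")                 // String (CSV separator)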
- def options(options: Map[String, String]): DataStreamReader
(Java-specific) Adds input options for the underlying data source.
- Since
2.0.0
- def options(options: Map[String, String]): DataStreamReader
(Scala-specific) Adds input options for the underlying data source.
- Since
2.0.0
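For example, a sketch of the Scala-specific overload (assuming an active SparkSession named spark; the option values are illustrative):
// Scala: set several options at once from a Map
val withOpts = spark.readStream.options(Map(
  "maxFilesPerTrigger" -> "5",
  "header"             -> "true"   // CSV header option
))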
- def orc(path: String): DataFrame
Loads an ORC file stream, returning the result as a DataFrame.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
ORC-specific option(s) for reading ORC file streams can be found in Data Source Option in the version you use.
- Since
2.3.0
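For example, a minimal sketch (assuming an active SparkSession named spark; the schema and path are illustrative):
// Scala: read an ORC file stream with an explicit schema
import org.apache.spark.sql.types._

val orcSchema = new StructType().add("id", LongType).add("name", StringType)
val orcDF = spark.readStream
  .schema(orcSchema)
  .option("maxFilesPerTrigger", "10")
  .orc("/path/to/orc/dir")   // illustrative path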
- def parquet(path: String): DataFrame
Loads a Parquet file stream, returning the result as a DataFrame.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
Parquet-specific option(s) for reading Parquet file streams can be found in Data Source Option in the version you use.
- Since
2.0.0
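For example, a minimal sketch (assuming an active SparkSession named spark; the schema and path are illustrative, and mergeSchema is a standard Parquet option):
// Scala: read a Parquet file stream with an explicit schema
import org.apache.spark.sql.types._

val pqSchema = new StructType().add("id", LongType).add("name", StringType)
val parquetDF = spark.readStream
  .schema(pqSchema)
  .option("mergeSchema", "true")     // Parquet-specific option
  .parquet("/path/to/parquet/dir")   // illustrative path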
- def schema(schemaString: String): DataStreamReader
Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Since
2.3.0
- def schema(schema: StructType): DataStreamReader
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Since
2.0.0
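For example, the two overloads expressing the same illustrative schema (assuming an active SparkSession named spark):
// Scala: the same schema, as a DDL string and as a StructType
import org.apache.spark.sql.types._

val byDdl    = spark.readStream.schema("name STRING, age INT")
val byStruct = spark.readStream.schema(
  new StructType().add("name", StringType).add("age", IntegerType))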
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def table(tableName: String): DataFrame
Defines a streaming DataFrame on a table. The data source corresponding to the table should support streaming mode.
- tableName
The name of the table
- Since
3.1.0
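For example, a minimal sketch (assuming an active SparkSession named spark and a streaming-capable table; the table name is illustrative):
// Scala: stream from a table whose data source supports streaming reads
val tableDF = spark.readStream.table("events")   // "events" is illustrative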
- def text(path: String): DataFrame
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.
By default, each line in the text files is a new row in the resulting DataFrame. For example:
// Scala:
spark.readStream.text("/path/to/directory/")
// Java:
spark.readStream().text("/path/to/directory/")
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the text-specific options for reading text files in Data Source Option in the version you use.
- Since
2.0.0
- def textFile(path: String): Dataset[String]
Loads text file(s) and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.
If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.
By default, each line in the text file is a new element in the resulting Dataset. For example:
// Scala:
spark.readStream.textFile("/path/to/spark/README.md")
// Java:
spark.readStream().textFile("/path/to/spark/README.md")
You can set the text-specific options as specified in DataStreamReader.text.
- path
input path
- Since
2.1.0
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def withLogContext(context: HashMap[String, String])(body: => Unit): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def xml(path: String): DataFrame
Loads an XML file stream and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the XML-specific options for reading XML file streams in Data Source Option in the version you use.
- Since
4.0.0
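For example, a minimal sketch (assuming an active SparkSession named spark; the schema, path, and row tag are illustrative, and rowTag is the standard XML source option naming the element treated as a row):
// Scala: read an XML file stream with an explicit schema
import org.apache.spark.sql.types._

val xmlSchema = new StructType().add("id", LongType).add("name", StringType)
val xmlDF = spark.readStream
  .schema(xmlSchema)            // or enable the inferSchema option instead
  .option("rowTag", "record")   // XML option naming the per-row element
  .xml("/path/to/xml/dir")      // illustrative path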
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)