abstract class DataFrameReader extends AnyRef
Interface used to load a Dataset from external storage systems (e.g. file systems,
key-value stores, etc.). Use SparkSession.read to access this.
- Annotations
- @Stable()
- Source
- DataFrameReader.scala
- Since
- 1.4.0 
Instance Constructors
-  new DataFrameReader()
Abstract Value Members
-   abstract def csv(csvDataset: Dataset[String]): DataFrame
Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.
If the schema is not specified using the schema function and the inferSchema option is enabled, this function goes through the input once to determine the input schema.
If the schema is not specified using the schema function and the inferSchema option is disabled, all columns are treated as string type and only the first line is read to determine the names and the number of fields.
If enforceSchema is set to false, only the CSV header in the first line is checked to conform to the specified or inferred schema.
- csvDataset
- input Dataset with one CSV row per record 
 - Since
- 2.2.0 
- Note
- if the header option is set to true when calling this API, all lines identical to the header will be removed if they exist.
 
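A minimal sketch of this overload, assuming an active SparkSession named spark; the column names and sample rows are invented for illustration:
import spark.implicits._
val csvRows = Seq("name,age", "alice,29", "bob,41").toDS()
val df = spark.read
  .option("header", "true")       // treat the first row as column names
  .option("inferSchema", "true")  // costs one extra pass over csvRows to infer types
  .csv(csvRows)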
-   abstract def jdbc(url: String, table: String, predicates: Array[String], connectionProperties: Properties): DataFrame
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems. You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option) in the version you use.
- table
- Name of the table in the external database. 
- predicates
- Condition in the WHERE clause for each partition. 
- connectionProperties
- JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch. 
 - Since
- 1.4.0 
 
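A hedged sketch of predicate-partitioned reading, assuming an active SparkSession named spark; the URL, table, column, and credentials are placeholders, not real endpoints:
import java.util.Properties
val props = new Properties()
props.setProperty("user", "readonly")   // placeholder credentials
props.setProperty("password", "secret")
// each predicate becomes the WHERE clause of one partition, so this read yields 3 partitions
val predicates = Array("region = 'EMEA'", "region = 'APAC'", "region = 'AMER'")
val df = spark.read.jdbc("jdbc:postgresql://dbhost:5432/sales", "orders", predicates, props)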
-   abstract def json(jsonDataset: Dataset[String]): DataFrame
Loads a Dataset[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
- jsonDataset
- input Dataset with one JSON object per record 
 - Since
- 2.2.0 
 
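A minimal sketch, assuming an active SparkSession named spark; the JSON payloads are invented:
import spark.implicits._
val jsonLines = Seq("""{"id": 1, "name": "alice"}""", """{"id": 2, "name": "bob"}""").toDS()
val df = spark.read.json(jsonLines)  // one pass over jsonLines to infer the schema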
-   abstract def load(paths: String*): DataFrame
Loads input as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.
- Annotations
- @varargs()
- Since
- 1.6.0 
 
-   abstract def load(path: String): DataFrame
Loads input as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).
- Since
- 1.4.0 
 
-   abstract def load(): DataFrame
Loads input as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
- Since
- 1.4.0 
 
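A sketch contrasting the three load() overloads, assuming an active SparkSession named spark; the paths and the no-path source name are placeholders:
// path-based source: format() picks the implementation, load() supplies the path(s)
val single = spark.read.format("parquet").load("/data/events/2024")
val multi = spark.read.format("parquet").load("/data/events/2024", "/data/events/2025")
// a source that needs no path (e.g. a key-value store) uses the zero-argument load();
// "somesource" and its option are hypothetical
val external = spark.read.format("somesource").option("keyspace", "ks").load()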
-   abstract def table(tableName: String): DataFrame
Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.
- tableName
- is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here. 
 - Since
- 1.4.0 
 
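A short sketch of name resolution, assuming an active SparkSession named spark and that the tables/views exist; the names are invented:
// qualified name: resolved inside the named database
val qualified = spark.read.table("sales.orders")
// unqualified name: temporary views are tried first, then the current database
val unqualified = spark.read.table("orders")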
-   abstract def xml(xmlDataset: Dataset[String]): DataFrame
Loads a Dataset[String] storing XML objects and returns the result as a DataFrame.
If the schema is not specified using the schema function and the inferSchema option is enabled, this function goes through the input once to determine the input schema.
- xmlDataset
- input Dataset with one XML object per record 
 - Since
- 4.0.0 
 
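A minimal sketch, assuming Spark 4.0+ and an active SparkSession named spark; the records and the rowTag value are invented for illustration:
import spark.implicits._
val xmlRecords = Seq(
  "<person><name>alice</name><age>29</age></person>",
  "<person><name>bob</name><age>41</age></person>").toDS()
val df = spark.read.option("rowTag", "person").xml(xmlRecords)  // one pass to infer the schema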
-   abstract def json(jsonRDD: RDD[String]): DataFrame
Loads an RDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
- jsonRDD
- input RDD with one JSON object per record 
 - Annotations
- @deprecated
- Deprecated
- (Since version 2.2.0) Use json(Dataset[String]) instead. 
- Since
- 1.4.0 
- Note
- this method is not supported in Spark Connect. 
 
-   abstract def json(jsonRDD: JavaRDD[String]): DataFrame
Loads a JavaRDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
- jsonRDD
- input RDD with one JSON object per record 
 - Annotations
- @deprecated
- Deprecated
- (Since version 2.2.0) Use json(Dataset[String]) instead. 
- Since
- 1.4.0 
- Note
- this method is not supported in Spark Connect. 
 
Concrete Value Members
-   final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
 
-   final def ##: Int
- Definition Classes
- AnyRef → Any
 
-   final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
 
-   final def asInstanceOf[T0]: T0
- Definition Classes
- Any
 
-    def assertNoSpecifiedSchema(operation: String): Unit
A convenient function for schema validation in APIs.
- Attributes
- protected
 
-    def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
 
-    def csv(paths: String*): DataFrame
Loads CSV files and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can find the CSV-specific options for reading CSV files in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option) in the version you use.
- Annotations
- @varargs()
- Since
- 2.0.0 
 
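A sketch of a multi-path CSV read, assuming an active SparkSession named spark; the paths and the DDL schema string are placeholders:
val df = spark.read
  .option("header", "true")
  .schema("name STRING, age INT")  // an explicit schema avoids the inference pass
  .csv("/data/people/2024.csv", "/data/people/2025.csv")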
-    def csv(path: String): DataFrame
Loads a CSV file and returns the result as a DataFrame. See the documentation on the other overloaded csv() method for more details.
- Since
- 2.0.0 
 
-   final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
 
-    def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
 
-    var extraOptions: CaseInsensitiveMap[String]
- Attributes
- protected
 
-    def format(source: String): DataFrameReader.this.type
Specifies the input data source format.
- Since
- 1.4.0 
 
-   final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
-    def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
 
-   final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
 
-    def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int, connectionProperties: Properties): DataFrame
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems. You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option) in the version you use.
- table
- Name of the table in the external database. 
- columnName
- Alias of the partitionColumn option. Refer to partitionColumn in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option) in the version you use.
- connectionProperties
- JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch, and "queryTimeout" can be used to wait up to the given number of seconds for a Statement object to execute. 
 - Since
- 1.4.0 
 
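A hedged sketch of range-partitioned reading, assuming an active SparkSession named spark; the connection details, column, and bounds are placeholders:
import java.util.Properties
val props = new Properties()
props.setProperty("user", "readonly")   // placeholder credentials
props.setProperty("password", "secret")
// the numeric id column is split into 8 roughly even ranges between the bounds,
// and each range is fetched as one partition
val df = spark.read.jdbc(
  "jdbc:postgresql://dbhost:5432/sales", "orders",
  "id",       // columnName, i.e. the partitionColumn
  1L,         // lowerBound
  1000000L,   // upperBound
  8,          // numPartitions
  props)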
-    def jdbc(url: String, table: String, properties: Properties): DataFrame
Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option) in the version you use.
- Since
- 1.4.0 
 
-    def json(paths: String*): DataFrame
Loads JSON files and returns the results as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
You can find the JSON-specific options for reading JSON files in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option) in the version you use.
- Annotations
- @varargs()
- Since
- 2.0.0 
 
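A sketch of both input shapes, assuming an active SparkSession named spark; the paths are placeholders:
// JSON Lines (the default): one object per line
val lines = spark.read.json("/data/logs/2024.jsonl")
// one JSON document spanning a whole file: enable multiLine
val docs = spark.read.option("multiLine", "true").json("/data/docs/")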
-    def json(path: String): DataFrame
Loads a JSON file and returns the results as a DataFrame. See the documentation on the overloaded json() method with varargs for more details.
- Since
- 1.4.0 
 
-   final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
 
-   final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-   final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
 
-    def option(key: String, value: Double): DataFrameReader.this.type
Adds an input option for the underlying data source. All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Since
- 2.0.0 
 
-    def option(key: String, value: Long): DataFrameReader.this.type
Adds an input option for the underlying data source. All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Since
- 2.0.0 
 
-    def option(key: String, value: Boolean): DataFrameReader.this.type
Adds an input option for the underlying data source. All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Since
- 2.0.0 
 
-    def option(key: String, value: String): DataFrameReader.this.type
Adds an input option for the underlying data source. All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Since
- 1.4.0 
 
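A short sketch of the case-insensitive override behavior described above, assuming an active SparkSession named spark:
val reader = spark.read
  .option("HEADER", true)   // stored case-insensitively under "header"
  .option("header", false)  // same key, so this overrides the previous value
  .option("sep", ";")       // typed overloads exist for String, Boolean, Long and Double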
-    def options(opts: Map[String, String]): DataFrameReader.this.type
Adds input options for the underlying data source. All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Since
- 1.4.0 
 
-    def options(options: Map[String, String]): DataFrameReader.this.type
(Scala-specific) Adds input options for the underlying data source. All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Since
- 1.4.0 
 
-    def orc(paths: String*): DataFrame
Loads ORC files and returns the result as a DataFrame.
ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.
- paths
- input paths 
 - Annotations
- @varargs()
- Since
- 2.0.0 
 
-    def orc(path: String): DataFrame
Loads an ORC file and returns the result as a DataFrame.
- path
- input path 
 - Since
- 1.5.0 
 
-    def parquet(paths: String*): DataFrame
Loads Parquet files, returning the result as a DataFrame.
Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.
- Annotations
- @varargs()
- Since
- 1.4.0 
 
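A minimal sketch, assuming an active SparkSession named spark; the paths are placeholders:
val df = spark.read.parquet("/data/events/2024", "/data/events/2025")
// Parquet files carry their own schema, so no inference pass over the data is needed
df.printSchema()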
-    def parquet(path: String): DataFrame
Loads a Parquet file, returning the result as a DataFrame. See the documentation on the other overloaded parquet() method for more details.
- Since
- 2.0.0 
 
-    def schema(schemaString: String): DataFrameReader.this.type
Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
spark.read.schema("a INT, b STRING, c DOUBLE").csv("test.csv")
- Since
- 2.3.0 
 
-    def schema(schema: StructType): DataFrameReader.this.type
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Since
- 1.4.0 
 
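A sketch of the StructType form, equivalent to the DDL-string example above; it assumes an active SparkSession named spark and a local test.csv:
import org.apache.spark.sql.types.{DoubleType, IntegerType, StringType, StructField, StructType}
val schema = StructType(Seq(
  StructField("a", IntegerType),
  StructField("b", StringType),
  StructField("c", DoubleType)))
val df = spark.read.schema(schema).csv("test.csv")  // skips the inference pass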
-    var source: String
- Attributes
- protected
 
-   final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
 
-    def text(paths: String*): DataFrame
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.
By default, each line in the text files is a new row in the resulting DataFrame. For example:
// Scala:
spark.read.text("/path/to/spark/README.md")
// Java:
spark.read().text("/path/to/spark/README.md")
You can find the text-specific options for reading text files in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-text.html#data-source-option) in the version you use.
- paths
- input paths 
 - Annotations
- @varargs()
- Since
- 1.6.0 
 
-    def text(path: String): DataFrame
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. See the documentation on the other overloaded text() method for more details.
- Since
- 2.0.0 
 
-    def textFile(paths: String*): Dataset[String]
Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.
If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.
By default, each line in the text files is a new row in the resulting Dataset. For example:
// Scala:
spark.read.textFile("/path/to/spark/README.md")
// Java:
spark.read().textFile("/path/to/spark/README.md")
You can set the text-specific options as specified in DataFrameReader.text.
- paths
- input paths 
 - Annotations
- @varargs()
- Since
- 2.0.0 
 
-    def textFile(path: String): Dataset[String]
Loads text files and returns a Dataset of String. See the documentation on the other overloaded textFile() method for more details.
- Since
- 2.0.0 
 
-    def toString(): String
- Definition Classes
- AnyRef → Any
 
-    var userSpecifiedSchema: Option[StructType]
- Attributes
- protected
 
-    def validateJsonSchema(): Unit
- Attributes
- protected
 
-    def validateSingleVariantColumn(): Unit
Ensure that the singleVariantColumn option cannot be used if there is also a user-specified schema.
- Attributes
- protected
 
-    def validateXmlSchema(): Unit
- Attributes
- protected
 
-   final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
-   final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
 
-   final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
 
-    def xml(paths: String*): DataFrame
Loads XML files and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can find the XML-specific options for reading XML files in Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-xml.html#data-source-option) in the version you use.
- Annotations
- @varargs()
- Since
- 4.0.0 
 
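A sketch of a file-based XML read, assuming Spark 4.0+ and an active SparkSession named spark; the path, rowTag value, and schema are placeholders:
val df = spark.read
  .option("rowTag", "book")              // the XML element treated as one row
  .schema("title STRING, price DOUBLE")  // an explicit schema skips the inference pass
  .xml("/data/catalog.xml")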
-    def xml(path: String): DataFrame
Loads an XML file and returns the result as a DataFrame. See the documentation on the other overloaded xml() method for more details.
- Since
- 4.0.0 
 
Deprecated Value Members
-    def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
- (Since version 9)