Class DataFrameReader
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc). Use SparkSession.read to access this.
- Since:
- 1.4.0
-
Method Summary
- csv(path) - Loads a CSV file and returns the result as a DataFrame.
- csv(paths) - Loads CSV files and returns the result as a DataFrame.
- format(source) - Specifies the input data source format.
- jdbc(String url, String table, String[] predicates, Properties connectionProperties) - Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties.
- jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions, Properties connectionProperties) - Construct a DataFrame representing the database table accessible via JDBC URL url named table.
- jdbc(String url, String table, Properties properties) - Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
- json(path) - Loads a JSON file and returns the results as a DataFrame.
- json(paths) - Loads JSON files and returns the results as a DataFrame.
- json(jsonRDD) - Deprecated. Use json(Dataset[String]) instead. (Two RDD-based overloads.)
- load() - Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
- load(path) - Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a file system).
- load(paths) - Loads input in as a DataFrame, for data sources that support multiple paths.
- option(key, value) - Adds an input option for the underlying data source. (Four overloads.)
- options - Adds input options for the underlying data source. (Including a Scala-specific variant.)
- orc - Loads ORC files and returns the result as a DataFrame.
- parquet - Loads a Parquet file, returning the result as a DataFrame.
- schema(String schemaString) - Specifies the schema by using the input DDL-formatted string.
- schema(StructType schema) - Specifies the input schema.
- table(tableName) - Returns the specified table/view as a DataFrame.
- text - Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- textFile - Loads text files and returns a Dataset of String.
- xml - Loads XML files and returns the result as a DataFrame.

Methods inherited from class org.apache.spark.sql.api.DataFrameReader:
csv, json, xml
-
Method Details
-
csv
Description copied from class: DataFrameReader
Loads CSV files and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can find the CSV-specific options for reading CSV files in Data Source Option in the version you use.
- Overrides:
- csv in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
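As a sketch of the schema-versus-inference trade-off described above, the snippet below supplies an explicit schema so Spark can skip the inference pass. It assumes an active SparkSession named `spark`; the file path and column names are hypothetical.

```java
// Assumes an active SparkSession named `spark`.
// Supplying the schema up front skips the extra pass over the data
// that inferSchema would otherwise trigger.
Dataset<Row> people = spark.read()
        .schema("id INT, name STRING, score DOUBLE") // hypothetical columns
        .option("header", "true")                    // first line is a header row
        .csv("data/people.csv");                     // hypothetical path
```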
-
csv
Description copied from class: DataFrameReader
Loads a CSV file and returns the result as a DataFrame. See the documentation on the other overloaded csv() method for more details.
- Overrides:
- csv in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
csv
- Inheritdoc:
-
csv
Description copied from class: DataFrameReader
Loads CSV files and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can find the CSV-specific options for reading CSV files in Data Source Option in the version you use.
- Overrides:
- csv in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
format
Description copied from class: DataFrameReader
Specifies the input data source format.
- Overrides:
- format in class DataFrameReader<Dataset>
- Parameters:
- source - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
jdbc
Description copied from class: DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.
- Overrides:
- jdbc in class DataFrameReader<Dataset>
- Parameters:
- url - (undocumented)
- table - (undocumented)
- properties - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
jdbc
public Dataset<Row> jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions, Properties connectionProperties)
Description copied from class: DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.
You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.
- Overrides:
- jdbc in class DataFrameReader<Dataset>
- Parameters:
- url - (undocumented)
- table - Name of the table in the external database.
- columnName - Alias of the partitionColumn option. Refer to partitionColumn in Data Source Option in the version you use.
- lowerBound - (undocumented)
- upperBound - (undocumented)
- numPartitions - (undocumented)
- connectionProperties - JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch, and "queryTimeout" can be used to wait for a Statement object to execute up to the given number of seconds.
- Returns:
- (undocumented)
- Inheritdoc:
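A minimal sketch of the partitioned read described above. It assumes an active SparkSession named `spark`; the JDBC URL, table, column, and credentials are hypothetical.

```java
// Hypothetical connection details; adjust to your database.
Properties props = new Properties();
props.setProperty("user", "reader");
props.setProperty("password", "secret");
props.setProperty("fetchsize", "1000"); // rows fetched per round trip

// Spark issues 8 parallel range scans over order_id (the partitionColumn).
Dataset<Row> orders = spark.read().jdbc(
        "jdbc:postgresql://db-host:5432/shop", // url (hypothetical)
        "orders",                              // table (hypothetical)
        "order_id",                            // columnName, alias of partitionColumn
        1L,                                    // lowerBound
        1_000_000L,                            // upperBound
        8,                                     // numPartitions
        props);
```

Note that lowerBound and upperBound only determine the partition stride; they do not filter rows, so the whole table is still read.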
-
jdbc
public Dataset<Row> jdbc(String url, String table, String[] predicates, Properties connectionProperties)
Description copied from class: DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.
You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.
- Specified by:
- jdbc in class DataFrameReader<Dataset>
- Parameters:
- url - (undocumented)
- table - Name of the table in the external database.
- predicates - Condition in the WHERE clause for each partition.
- connectionProperties - JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch.
- Returns:
- (undocumented)
- Inheritdoc:
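As a sketch of the predicate-based variant, each WHERE-clause fragment below becomes one partition. An active SparkSession named `spark` and a Properties object `props` with connection settings are assumed; table and column names are hypothetical.

```java
// One partition per predicate; together they should cover the table
// without overlapping, or rows will be duplicated or dropped.
String[] predicates = {
        "region = 'EU'",
        "region = 'US'",
        "region NOT IN ('EU', 'US')"
};
Dataset<Row> sales = spark.read()
        .jdbc("jdbc:postgresql://db-host:5432/shop", "sales", predicates, props);
```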
-
json
Description copied from class: DataFrameReader
Loads JSON files and returns the results as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
You can find the JSON-specific options for reading JSON files in Data Source Option in the version you use.
- Overrides:
- json in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
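The two modes described above can be sketched as follows, assuming an active SparkSession named `spark` (paths are hypothetical):

```java
// Default mode: JSON Lines, one record per line.
Dataset<Row> events = spark.read().json("data/events.jsonl");

// multiLine mode: one record may span many lines, e.g. a
// pretty-printed JSON document per file.
Dataset<Row> docs = spark.read()
        .option("multiLine", "true")
        .json("data/docs/");
```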
-
json
Description copied from class: DataFrameReader
Loads a JSON file and returns the results as a DataFrame.
See the documentation on the overloaded json() method with varargs for more details.
- Overrides:
- json in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
json
Description copied from class: DataFrameReader
Loads JSON files and returns the results as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
You can find the JSON-specific options for reading JSON files in Data Source Option in the version you use.
- Overrides:
- json in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
json
Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.
Loads a JavaRDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
- Parameters:
- jsonRDD - input RDD with one JSON object per record
- Returns:
- (undocumented)
- Since:
- 1.4.0
-
json
Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.
Loads an RDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
- Parameters:
- jsonRDD - input RDD with one JSON object per record
- Returns:
- (undocumented)
- Since:
- 1.4.0
-
json
- Inheritdoc:
-
load
Description copied from class: DataFrameReader
Loads input in as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.
- Overrides:
- load in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
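The generic format()/load() pair sketched below shows how a source name and multiple paths fit together; it assumes an active SparkSession named `spark`, and the paths are hypothetical.

```java
// format() names the data source; load() then accepts one or more
// paths for file-based sources.
Dataset<Row> df = spark.read()
        .format("parquet")
        .option("mergeSchema", "true")     // reconcile schemas across paths
        .load("data/2023/", "data/2024/"); // multiple input paths
```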
-
load
Description copied from class: DataFrameReader
Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
- Specified by:
- load in class DataFrameReader<Dataset>
- Returns:
- (undocumented)
- Inheritdoc:
-
load
Description copied from class: DataFrameReader
Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).
- Specified by:
- load in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
load
Description copied from class: DataFrameReader
Loads input in as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.
- Specified by:
- load in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
option
Description copied from class: DataFrameReader
Adds an input option for the underlying data source.
All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Overrides:
- option in class DataFrameReader<Dataset>
- Parameters:
- key - (undocumented)
- value - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
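The value-typed overloads and the case-insensitive override rule can be sketched as follows, assuming an active SparkSession named `spark` (the path and option values are hypothetical):

```java
Dataset<Row> wide = spark.read()
        .option("sep", ";")            // String overload
        .option("header", true)        // boolean overload
        .option("maxColumns", 20480L)  // long overload
        .option("samplingRatio", 0.5)  // double overload
        .option("SEP", ",")            // keys are case-insensitive: overrides "sep"
        .csv("data/wide.csv");         // hypothetical path
```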
-
option
Description copied from class: DataFrameReader
Adds an input option for the underlying data source.
All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Overrides:
- option in class DataFrameReader<Dataset>
- Parameters:
- key - (undocumented)
- value - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
option
Description copied from class: DataFrameReader
Adds an input option for the underlying data source.
All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Overrides:
- option in class DataFrameReader<Dataset>
- Parameters:
- key - (undocumented)
- value - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
option
Description copied from class: DataFrameReader
Adds an input option for the underlying data source.
All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Overrides:
- option in class DataFrameReader<Dataset>
- Parameters:
- key - (undocumented)
- value - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
options
Description copied from class: DataFrameReader
(Scala-specific) Adds input options for the underlying data source.
All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Overrides:
- options in class DataFrameReader<Dataset>
- Parameters:
- options - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
options
Description copied from class: DataFrameReader
Adds input options for the underlying data source.
All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.
- Overrides:
- options in class DataFrameReader<Dataset>
- Parameters:
- options - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
orc
Description copied from class: DataFrameReader
Loads ORC files and returns the result as a DataFrame.
ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.
- Overrides:
- orc in class DataFrameReader<Dataset>
- Parameters:
- paths - input paths
- Returns:
- (undocumented)
- Inheritdoc:
-
orc
Description copied from class: DataFrameReader
Loads an ORC file and returns the result as a DataFrame.
- Overrides:
- orc in class DataFrameReader<Dataset>
- Parameters:
- path - input path
- Returns:
- (undocumented)
- Inheritdoc:
-
orc
Description copied from class: DataFrameReader
Loads ORC files and returns the result as a DataFrame.
ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.
- Overrides:
- orc in class DataFrameReader<Dataset>
- Parameters:
- paths - input paths
- Returns:
- (undocumented)
- Inheritdoc:
-
parquet
Description copied from class: DataFrameReader
Loads a Parquet file, returning the result as a DataFrame.
Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.
- Overrides:
- parquet in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
parquet
Description copied from class: DataFrameReader
Loads a Parquet file, returning the result as a DataFrame. See the documentation on the other overloaded parquet() method for more details.
- Overrides:
- parquet in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
parquet
Description copied from class: DataFrameReader
Loads a Parquet file, returning the result as a DataFrame.
Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.
- Overrides:
- parquet in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
schema
Description copied from class: DataFrameReader
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Overrides:
- schema in class DataFrameReader<Dataset>
- Parameters:
- schema - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
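A StructType equivalent of the DDL-string example in the other schema() overload can be sketched as follows, assuming an active SparkSession named `spark`:

```java
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// Programmatic counterpart of schema("a INT, b STRING, c DOUBLE").
StructType schema = new StructType()
        .add("a", DataTypes.IntegerType)
        .add("b", DataTypes.StringType)
        .add("c", DataTypes.DoubleType);

Dataset<Row> df = spark.read().schema(schema).csv("test.csv");
```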
-
schema
Description copied from class: DataFrameReader
Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
spark.read.schema("a INT, b STRING, c DOUBLE").csv("test.csv")
- Overrides:
- schema in class DataFrameReader<Dataset>
- Parameters:
- schemaString - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
table
Description copied from class: DataFrameReader
Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.
- Specified by:
- table in class DataFrameReader<Dataset>
- Parameters:
- tableName - is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.
- Returns:
- (undocumented)
- Inheritdoc:
-
text
Description copied from class: DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.
By default, each line in the text files is a new row in the resulting DataFrame. For example:
// Scala: spark.read.text("/path/to/spark/README.md")
// Java: spark.read().text("/path/to/spark/README.md")
You can find the text-specific options for reading text files in Data Source Option in the version you use.
- Overrides:
- text in class DataFrameReader<Dataset>
- Parameters:
- paths - input paths
- Returns:
- (undocumented)
- Inheritdoc:
-
text
Description copied from class: DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. See the documentation on the other overloaded text() method for more details.
- Overrides:
- text in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
text
Description copied from class: DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.
By default, each line in the text files is a new row in the resulting DataFrame. For example:
// Scala: spark.read.text("/path/to/spark/README.md")
// Java: spark.read().text("/path/to/spark/README.md")
You can find the text-specific options for reading text files in Data Source Option in the version you use.
- Overrides:
- text in class DataFrameReader<Dataset>
- Parameters:
- paths - input paths
- Returns:
- (undocumented)
- Inheritdoc:
-
textFile
Description copied from class: DataFrameReader
Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.
If the directory structure of the text files contains partitioning information, it is ignored in the resulting Dataset. To include partitioning information as columns, use text.
By default, each line in the text files is a new row in the resulting Dataset. For example:
// Scala: spark.read.textFile("/path/to/spark/README.md")
// Java: spark.read().textFile("/path/to/spark/README.md")
You can set the text-specific options as specified in DataFrameReader.text.
- Overrides:
- textFile in class DataFrameReader<Dataset>
- Parameters:
- paths - input paths
- Returns:
- (undocumented)
- Inheritdoc:
-
textFile
Description copied from class: DataFrameReader
Loads text files and returns a Dataset of String. See the documentation on the other overloaded textFile() method for more details.
- Overrides:
- textFile in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
textFile
Description copied from class: DataFrameReader
Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.
If the directory structure of the text files contains partitioning information, it is ignored in the resulting Dataset. To include partitioning information as columns, use text.
By default, each line in the text files is a new row in the resulting Dataset. For example:
// Scala: spark.read.textFile("/path/to/spark/README.md")
// Java: spark.read().textFile("/path/to/spark/README.md")
You can set the text-specific options as specified in DataFrameReader.text.
- Overrides:
- textFile in class DataFrameReader<Dataset>
- Parameters:
- paths - input paths
- Returns:
- (undocumented)
- Inheritdoc:
-
xml
Description copied from class: DataFrameReader
Loads XML files and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can find the XML-specific options for reading XML files in Data Source Option in the version you use.
- Overrides:
- xml in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
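A sketch of an XML read using the rowTag option (which selects the element that maps to one row; consult the XML Data Source Option table for your Spark version). An active SparkSession named `spark` is assumed; the path and element name are hypothetical.

```java
// Each <book> element becomes one row of the DataFrame; supplying
// a schema via schema() would skip the inference pass noted above.
Dataset<Row> books = spark.read()
        .option("rowTag", "book")  // hypothetical row element
        .xml("data/books.xml");    // hypothetical path
```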
-
xml
Description copied from class: DataFrameReader
Loads an XML file and returns the result as a DataFrame. See the documentation on the other overloaded xml() method for more details.
- Overrides:
- xml in class DataFrameReader<Dataset>
- Parameters:
- path - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
xml
Description copied from class: DataFrameReader
Loads XML files and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can find the XML-specific options for reading XML files in Data Source Option in the version you use.
- Overrides:
- xml in class DataFrameReader<Dataset>
- Parameters:
- paths - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
xml
- Inheritdoc:
-