public class DataFrameReader
extends Object
implements org.apache.spark.internal.Logging

Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc). Use SparkSession.read to access this.
Modifier and Type | Method and Description
---|---
Dataset<Row> | csv(Dataset<String> csvDataset): Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.
Dataset<Row> | csv(scala.collection.Seq<String> paths): Loads CSV files and returns the result as a DataFrame.
Dataset<Row> | csv(String... paths): Loads CSV files and returns the result as a DataFrame.
Dataset<Row> | csv(String path): Loads a CSV file and returns the result as a DataFrame.
DataFrameReader | format(String source): Specifies the input data source format.
Dataset<Row> | jdbc(String url, String table, java.util.Properties properties): Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
Dataset<Row> | jdbc(String url, String table, String[] predicates, java.util.Properties connectionProperties): Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties.
Dataset<Row> | jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions, java.util.Properties connectionProperties): Construct a DataFrame representing the database table accessible via JDBC URL url named table.
Dataset<Row> | json(Dataset<String> jsonDataset): Loads a Dataset[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
Dataset<Row> | json(JavaRDD<String> jsonRDD): Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.
Dataset<Row> | json(RDD<String> jsonRDD): Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.
Dataset<Row> | json(scala.collection.Seq<String> paths): Loads JSON files and returns the results as a DataFrame.
Dataset<Row> | json(String... paths): Loads JSON files and returns the results as a DataFrame.
Dataset<Row> | json(String path): Loads a JSON file and returns the results as a DataFrame.
Dataset<Row> | load(): Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
Dataset<Row> | load(scala.collection.Seq<String> paths): Loads input in as a DataFrame, for data sources that support multiple paths.
Dataset<Row> | load(String... paths): Loads input in as a DataFrame, for data sources that support multiple paths.
Dataset<Row> | load(String path): Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).
DataFrameReader | option(String key, boolean value): Adds an input option for the underlying data source.
DataFrameReader | option(String key, double value): Adds an input option for the underlying data source.
DataFrameReader | option(String key, long value): Adds an input option for the underlying data source.
DataFrameReader | option(String key, String value): Adds an input option for the underlying data source.
DataFrameReader | options(scala.collection.Map<String,String> options): (Scala-specific) Adds input options for the underlying data source.
DataFrameReader | options(java.util.Map<String,String> options): Adds input options for the underlying data source.
Dataset<Row> | orc(scala.collection.Seq<String> paths): Loads ORC files and returns the result as a DataFrame.
Dataset<Row> | orc(String... paths): Loads ORC files and returns the result as a DataFrame.
Dataset<Row> | orc(String path): Loads an ORC file and returns the result as a DataFrame.
Dataset<Row> | parquet(scala.collection.Seq<String> paths): Loads a Parquet file, returning the result as a DataFrame.
Dataset<Row> | parquet(String... paths): Loads a Parquet file, returning the result as a DataFrame.
Dataset<Row> | parquet(String path): Loads a Parquet file, returning the result as a DataFrame.
DataFrameReader | schema(String schemaString): Specifies the schema by using the input DDL-formatted string.
DataFrameReader | schema(StructType schema): Specifies the input schema.
Dataset<Row> | table(String tableName): Returns the specified table/view as a DataFrame.
Dataset<Row> | text(scala.collection.Seq<String> paths): Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
Dataset<Row> | text(String... paths): Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
Dataset<Row> | text(String path): Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
Dataset<String> | textFile(scala.collection.Seq<String> paths): Loads text files and returns a Dataset of String.
Dataset<String> | textFile(String... paths): Loads text files and returns a Dataset of String.
Dataset<String> | textFile(String path): Loads text files and returns a Dataset of String.
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public Dataset<Row> csv(String... paths)

Loads CSV files and returns the result as a DataFrame.

This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.

You can find the CSV-specific options for reading CSV files in Data Source Option in the version you use.

Parameters:
paths - (undocumented)

public Dataset<Row> csv(String path)

Loads a CSV file and returns the result as a DataFrame. See the documentation on the other overloaded csv() method for more details.

Parameters:
path - (undocumented)

public Dataset<Row> csv(Dataset<String> csvDataset)

Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.

If the schema is not specified using the schema function and the inferSchema option is enabled, this function goes through the input once to determine the input schema.

If the schema is not specified using the schema function and the inferSchema option is disabled, it determines the columns as string types and reads only the first line to determine the names and the number of fields.

If enforceSchema is set to false, only the CSV header in the first line is checked to conform to the specified or inferred schema.

Parameters:
csvDataset - input Dataset with one CSV row per record

Note: if the header option is set to true when calling this API, all lines matching the header will be removed if they exist.
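As a usage sketch (not part of the original Javadoc): supplying an explicit schema avoids the extra pass over the data that inferSchema would trigger. The SparkSession setup, column names, and file path below are assumptions for illustration.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class CsvReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("csv-read-sketch")
                .master("local[*]")        // local mode, for illustration only
                .getOrCreate();

        // Declaring the schema up front skips the inferSchema scan entirely.
        StructType schema = new StructType()
                .add("id", DataTypes.LongType)
                .add("name", DataTypes.StringType)
                .add("score", DataTypes.DoubleType);

        Dataset<Row> df = spark.read()
                .option("header", "true")   // first line holds column names
                .schema(schema)             // explicit schema: no inference pass
                .csv("data/people.csv");    // hypothetical path

        df.show();
        spark.stop();
    }
}
```

With inferSchema enabled instead, Spark would read the input once just to determine column types before the actual load.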
public Dataset<Row> csv(scala.collection.Seq<String> paths)

Loads CSV files and returns the result as a DataFrame.

This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.

You can find the CSV-specific options for reading CSV files in Data Source Option in the version you use.

Parameters:
paths - (undocumented)

public DataFrameReader format(String source)

Specifies the input data source format.

Parameters:
source - (undocumented)

public Dataset<Row> jdbc(String url, String table, java.util.Properties properties)

Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.

You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.

Parameters:
url - (undocumented)
table - (undocumented)
properties - (undocumented)

public Dataset<Row> jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions, java.util.Properties connectionProperties)
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.

Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.

Parameters:
url - (undocumented)
table - Name of the table in the external database.
columnName - Alias of the partitionColumn option. Refer to partitionColumn in Data Source Option in the version you use.
lowerBound - (undocumented)
upperBound - (undocumented)
numPartitions - (undocumented)
connectionProperties - JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch and "queryTimeout" can be used to wait up to the given number of seconds for a Statement object to execute.
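A sketch of a partitioned JDBC read. The JDBC URL, table name, partition column, credentials, and bounds below are placeholders, not values from the original document.

```java
import java.util.Properties;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JdbcReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("jdbc-read-sketch")
                .master("local[*]")
                .getOrCreate();

        Properties props = new Properties();
        props.setProperty("user", "reader");       // placeholder credentials
        props.setProperty("password", "secret");
        props.setProperty("fetchsize", "1000");    // rows per fetch

        // Rows with orders.id between 0 and 1,000,000 are split across 8
        // partitions, each issuing its own bounded query in parallel.
        Dataset<Row> orders = spark.read().jdbc(
                "jdbc:postgresql://db.example.com/shop", // hypothetical URL
                "orders",
                "id",          // partition column (numeric, for this overload)
                0L,            // lowerBound
                1_000_000L,    // upperBound
                8,             // numPartitions; keep modest to avoid overloading the DB
                props);

        orders.show();
        spark.stop();
    }
}
```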
public Dataset<Row> jdbc(String url, String table, String[] predicates, java.util.Properties connectionProperties)

Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.

Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.

Parameters:
url - (undocumented)
table - Name of the table in the external database.
predicates - Condition in the WHERE clause for each partition.
connectionProperties - JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch.

public Dataset<Row> json(String... paths)
Loads JSON files and returns the results as a DataFrame.

JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.

This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

You can find the JSON-specific options for reading JSON files in Data Source Option in the version you use.

Parameters:
paths - (undocumented)

public Dataset<Row> json(String path)

Loads a JSON file and returns the results as a DataFrame. See the documentation on the overloaded json() method with varargs for more details.

Parameters:
path - (undocumented)

public Dataset<Row> json(scala.collection.Seq<String> paths)

Loads JSON files and returns the results as a DataFrame.

JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.

This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

You can find the JSON-specific options for reading JSON files in Data Source Option in the version you use.

Parameters:
paths - (undocumented)
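A sketch contrasting the two JSON layouts; the SparkSession and file paths are assumed for illustration. JSON Lines files load directly, while a file whose single record spans several lines needs multiLine.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JsonReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("json-read-sketch")
                .master("local[*]")
                .getOrCreate();

        // JSON Lines: one JSON object per line; the default mode.
        Dataset<Row> events = spark.read().json("data/events.jsonl"); // hypothetical path

        // Pretty-printed JSON with one record spanning several lines:
        Dataset<Row> config = spark.read()
                .option("multiLine", "true")
                .json("data/config.json"); // hypothetical path

        events.printSchema(); // schema was inferred by scanning the input once
        config.printSchema();
        spark.stop();
    }
}
```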
public Dataset<Row> json(JavaRDD<String> jsonRDD)

Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.

Loads a JavaRDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.

Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

Parameters:
jsonRDD - input RDD with one JSON object per record

public Dataset<Row> json(RDD<String> jsonRDD)

Deprecated. Use json(Dataset[String]) instead. Since 2.2.0.

Loads an RDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.

Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

Parameters:
jsonRDD - input RDD with one JSON object per record

public Dataset<Row> json(Dataset<String> jsonDataset)

Loads a Dataset[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.

Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

Parameters:
jsonDataset - input Dataset with one JSON object per record

public Dataset<Row> load(String... paths)
Loads input in as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.

Parameters:
paths - (undocumented)

public Dataset<Row> load()

Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).

public Dataset<Row> load(String path)

Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).

Parameters:
path - (undocumented)

public Dataset<Row> load(scala.collection.Seq<String> paths)

Loads input in as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.

Parameters:
paths - (undocumented)
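The generic format/load pair names the data source explicitly and is equivalent to the typed shortcuts such as csv() or parquet(). A sketch, with assumed file paths:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FormatLoadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("format-load-sketch")
                .master("local[*]")
                .getOrCreate();

        // format(...).load(...) names the source explicitly;
        // spark.read().csv(path) is shorthand for the same read.
        Dataset<Row> df = spark.read()
                .format("csv")
                .option("header", "true")
                .load("data/people.csv"); // hypothetical path

        // The varargs overload, for sources that support multiple paths:
        Dataset<Row> both = spark.read()
                .format("parquet")
                .load("data/part1.parquet", "data/part2.parquet"); // hypothetical paths

        df.show();
        both.show();
        spark.stop();
    }
}
```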
public DataFrameReader option(String key, String value)

Adds an input option for the underlying data source.

All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

Parameters:
key - (undocumented)
value - (undocumented)

public DataFrameReader option(String key, boolean value)

Adds an input option for the underlying data source.

All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

Parameters:
key - (undocumented)
value - (undocumented)

public DataFrameReader option(String key, long value)

Adds an input option for the underlying data source.

All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

Parameters:
key - (undocumented)
value - (undocumented)

public DataFrameReader option(String key, double value)

Adds an input option for the underlying data source.

All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

Parameters:
key - (undocumented)
value - (undocumented)

public DataFrameReader options(scala.collection.Map<String,String> options)

(Scala-specific) Adds input options for the underlying data source.

All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

Parameters:
options - (undocumented)

public DataFrameReader options(java.util.Map<String,String> options)

Adds input options for the underlying data source.

All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

Parameters:
options - (undocumented)
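Because option keys are case-insensitive, a later option with the same key in any casing overrides the earlier one. A sketch with assumed option values and file path:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OptionsExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("options-sketch")
                .master("local[*]")
                .getOrCreate();

        Map<String, String> opts = new HashMap<>();
        opts.put("header", "true");
        opts.put("delimiter", ";");

        Dataset<Row> df = spark.read()
                .options(opts)                 // bulk options from a Java Map
                .option("HEADER", "false")     // same key case-insensitively: overrides "header"
                .csv("data/semicolons.csv");   // hypothetical path

        df.show();
        spark.stop();
    }
}
```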
public Dataset<Row> orc(String... paths)

Loads ORC files and returns the result as a DataFrame.

ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.

Parameters:
paths - input paths

public Dataset<Row> orc(String path)

Loads an ORC file and returns the result as a DataFrame.

Parameters:
path - input path

public Dataset<Row> orc(scala.collection.Seq<String> paths)

Loads ORC files and returns the result as a DataFrame.

ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.

Parameters:
paths - input paths

public Dataset<Row> parquet(String... paths)

Loads a Parquet file, returning the result as a DataFrame.

Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.

Parameters:
paths - (undocumented)

public Dataset<Row> parquet(String path)
Loads a Parquet file, returning the result as a DataFrame. See the documentation on the other overloaded parquet() method for more details.

Parameters:
path - (undocumented)

public Dataset<Row> parquet(scala.collection.Seq<String> paths)

Loads a Parquet file, returning the result as a DataFrame.

Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.

Parameters:
paths - (undocumented)

public DataFrameReader schema(StructType schema)

Specifies the input schema.

Parameters:
schema - (undocumented)

public DataFrameReader schema(String schemaString)

Specifies the schema by using the input DDL-formatted string. For example:

spark.read.schema("a INT, b STRING, c DOUBLE").csv("test.csv")

Parameters:
schemaString - (undocumented)
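The two schema overloads are interchangeable. A sketch showing the DDL string from the Javadoc example alongside the equivalent programmatic StructType (the file path is the one from that example, assumed to exist):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class SchemaExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("schema-sketch")
                .master("local[*]")
                .getOrCreate();

        // DDL-formatted string, as in the Javadoc example:
        Dataset<Row> viaDdl = spark.read()
                .schema("a INT, b STRING, c DOUBLE")
                .csv("test.csv");

        // The equivalent StructType built programmatically:
        StructType st = new StructType()
                .add("a", DataTypes.IntegerType)
                .add("b", DataTypes.StringType)
                .add("c", DataTypes.DoubleType);
        Dataset<Row> viaStruct = spark.read().schema(st).csv("test.csv");

        viaDdl.printSchema();   // both readers produce the same schema
        viaStruct.printSchema();
        spark.stop();
    }
}
```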
public Dataset<Row> table(String tableName)

Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.

Parameters:
tableName - is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.

public Dataset<Row> text(String... paths)
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.

By default, each line in the text files is a new row in the resulting DataFrame. For example:

// Scala:
spark.read.text("/path/to/spark/README.md")

// Java:
spark.read().text("/path/to/spark/README.md")

You can find the text-specific options for reading text files in Data Source Option in the version you use.

Parameters:
paths - input paths

public Dataset<Row> text(String path)

Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. See the documentation on the other overloaded text() method for more details.

Parameters:
path - (undocumented)

public Dataset<Row> text(scala.collection.Seq<String> paths)

Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.

By default, each line in the text files is a new row in the resulting DataFrame. For example:

// Scala:
spark.read.text("/path/to/spark/README.md")

// Java:
spark.read().text("/path/to/spark/README.md")

You can find the text-specific options for reading text files in Data Source Option in the version you use.

Parameters:
paths - input paths

public Dataset<String> textFile(String... paths)
Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.

If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.

By default, each line in the text files is a new row in the resulting Dataset. For example:

// Scala:
spark.read.textFile("/path/to/spark/README.md")

// Java:
spark.read().textFile("/path/to/spark/README.md")

You can set the text-specific options as specified in DataFrameReader.text.

Parameters:
paths - input paths

public Dataset<String> textFile(String path)

Loads text files and returns a Dataset of String. See the documentation on the other overloaded textFile() method for more details.

Parameters:
path - (undocumented)

public Dataset<String> textFile(scala.collection.Seq<String> paths)

Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.

If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.

By default, each line in the text files is a new row in the resulting Dataset. For example:

// Scala:
spark.read.textFile("/path/to/spark/README.md")

// Java:
spark.read().textFile("/path/to/spark/README.md")

You can set the text-specific options as specified in DataFrameReader.text.

Parameters:
paths - input paths
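A sketch contrasting text() and textFile(), assuming a SparkSession named spark and a local README path: text() yields a DataFrame with a "value" column (plus any partition columns), while textFile() yields a plain Dataset of String and drops partition information.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class TextReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("text-read-sketch")
                .master("local[*]")
                .getOrCreate();

        // One row per line, in a DataFrame with a single "value" column
        // (plus partition columns, if the directory layout has them):
        Dataset<Row> asRows = spark.read().text("README.md"); // hypothetical path

        // Same content as a Dataset<String>; partition information is dropped:
        Dataset<String> asStrings = spark.read().textFile("README.md");

        System.out.println(asRows.schema().treeString());
        System.out.println(asStrings.count() + " lines");
        spark.stop();
    }
}
```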