Class DataFrameReader

Object
org.apache.spark.sql.api.DataFrameReader<Dataset>
org.apache.spark.sql.DataFrameReader

public class DataFrameReader extends org.apache.spark.sql.api.DataFrameReader<Dataset>
Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores, etc.). Use SparkSession.read to access this.

Since:
1.4.0
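A DataFrameReader is obtained from an active SparkSession rather than constructed directly. A minimal sketch (the application name and file path below are illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("reader-example")   // illustrative app name
  .master("local[*]")
  .getOrCreate()

// SparkSession.read returns a fresh DataFrameReader for batch loading.
val df = spark.read
  .option("header", "true")
  .csv("/tmp/people.csv")      // illustrative path
```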
  • Method Details

    • csv

      public Dataset<Row> csv(String... paths)
      Description copied from class: DataFrameReader
      Loads CSV files and returns the result as a DataFrame.

This function makes one pass over the input to determine the schema when the inferSchema option is enabled. To avoid that extra pass, disable the inferSchema option or specify the schema explicitly using schema.

      You can find the CSV-specific options for reading CSV files in Data Source Option in the version you use.

      Overrides:
      csv in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
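To illustrate the inference trade-off described above, a hedged sketch (paths are hypothetical, and spark is an existing SparkSession):

```scala
import org.apache.spark.sql.types._

// With inferSchema enabled, Spark makes an extra pass to determine column types.
val inferred = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/data/2023/*.csv", "/data/2024/*.csv")  // illustrative paths

// Supplying the schema up front avoids that extra pass entirely.
val schema = StructType(Seq(
  StructField("id", LongType),
  StructField("name", StringType)))
val explicit = spark.read
  .schema(schema)
  .option("header", "true")
  .csv("/data/2023/*.csv")
```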
    • csv

      public Dataset<Row> csv(String path)
      Description copied from class: DataFrameReader
      Loads a CSV file and returns the result as a DataFrame. See the documentation on the other overloaded csv() method for more details.

      Overrides:
      csv in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • csv

      public Dataset<Row> csv(Dataset<String> csvDataset)
Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.
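This overload parses CSV records that are already in memory as a Dataset[String], rather than reading from files. A small sketch (spark is an existing SparkSession):

```scala
import spark.implicits._

// Each element is one CSV record; with header=true the first is the header row.
val csvLines = Seq("id,name", "1,alice", "2,bob").toDS()

val df = spark.read
  .option("header", "true")
  .csv(csvLines)
```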
    • csv

      public Dataset<Row> csv(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
      Loads CSV files and returns the result as a DataFrame.

This function makes one pass over the input to determine the schema when the inferSchema option is enabled. To avoid that extra pass, disable the inferSchema option or specify the schema explicitly using schema.

      You can find the CSV-specific options for reading CSV files in Data Source Option in the version you use.

      Overrides:
      csv in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • format

      public DataFrameReader format(String source)
      Description copied from class: DataFrameReader
      Specifies the input data source format.

      Overrides:
      format in class DataFrameReader<Dataset>
      Parameters:
      source - (undocumented)
      Returns:
      (undocumented)
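format is typically chained with option and load for sources that have no dedicated shortcut method. A sketch (the source name and path are illustrative):

```scala
val df = spark.read
  .format("csv")            // any registered data source name can go here
  .option("header", "true")
  .load("/tmp/input.csv")   // illustrative path
```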
    • jdbc

      public Dataset<Row> jdbc(String url, String table, Properties properties)
      Description copied from class: DataFrameReader
      Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.

      You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.

      Overrides:
      jdbc in class DataFrameReader<Dataset>
      Parameters:
      url - (undocumented)
      table - (undocumented)
      properties - (undocumented)
      Returns:
      (undocumented)
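A hedged sketch of this overload, which reads the whole table through a single connection with no partitioning (URL, table name, and credentials are made up):

```scala
import java.util.Properties

val props = new Properties()
props.setProperty("user", "app_user")      // illustrative credentials
props.setProperty("password", "secret")

// Single-partition read of the entire table.
val df = spark.read.jdbc(
  "jdbc:postgresql://db-host:5432/sales",  // illustrative URL
  "public.orders",
  props)
```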
    • jdbc

      public Dataset<Row> jdbc(String url, String table, String columnName, long lowerBound, long upperBound, int numPartitions, Properties connectionProperties)
      Description copied from class: DataFrameReader
      Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.

      Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

      You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.

      Overrides:
      jdbc in class DataFrameReader<Dataset>
      Parameters:
      url - (undocumented)
      table - Name of the table in the external database.
      columnName - Alias of partitionColumn option. Refer to partitionColumn in Data Source Option in the version you use.
      lowerBound - (undocumented)
      upperBound - (undocumented)
      numPartitions - (undocumented)
      connectionProperties - JDBC database connection arguments, a list of arbitrary string tag/value. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch and "queryTimeout" can be used to wait for a Statement object to execute to the given number of seconds.
      Returns:
      (undocumented)
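The partitioning parameters split the range of the partition column into numPartitions strides, one parallel query per stride. A hedged sketch (URL, table, and bounds are made up):

```scala
import java.util.Properties

val props = new Properties()
props.setProperty("user", "app_user")      // illustrative
props.setProperty("password", "secret")

// 8 parallel queries, each covering a stride of order_id in [1, 1000000].
// The bounds only shape the strides; rows outside them are still read.
val df = spark.read.jdbc(
  url = "jdbc:postgresql://db-host:5432/sales",  // illustrative URL
  table = "public.orders",
  columnName = "order_id",
  lowerBound = 1L,
  upperBound = 1000000L,
  numPartitions = 8,
  connectionProperties = props)
```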
    • jdbc

      public Dataset<Row> jdbc(String url, String table, String[] predicates, Properties connectionProperties)
      Description copied from class: DataFrameReader
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.

      Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

      You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in Data Source Option in the version you use.

      Specified by:
      jdbc in class DataFrameReader<Dataset>
      Parameters:
      url - (undocumented)
      table - Name of the table in the external database.
      predicates - Condition in the where clause for each partition.
      connectionProperties - JDBC database connection arguments, a list of arbitrary string tag/value. Normally at least a "user" and "password" property should be included. "fetchsize" can be used to control the number of rows per fetch.
      Returns:
      (undocumented)
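With this overload each predicate string becomes the WHERE clause of one partition, which is useful when the data does not split evenly along a numeric column. A sketch with made-up connection details:

```scala
import java.util.Properties

val props = new Properties()
props.setProperty("user", "app_user")      // illustrative
props.setProperty("password", "secret")

// One partition per predicate; together they should cover every row exactly once.
val predicates = Array(
  "region = 'EMEA'",
  "region = 'APAC'",
  "region NOT IN ('EMEA', 'APAC')")

val df = spark.read.jdbc(
  "jdbc:postgresql://db-host:5432/sales",  // illustrative URL
  "public.orders",
  predicates,
  props)
```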
    • json

      public Dataset<Row> json(String... paths)
      Description copied from class: DataFrameReader
      Loads JSON files and returns the results as a DataFrame.

      JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.

      This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

      You can find the JSON-specific options for reading JSON files in Data Source Option in the version you use.

      Overrides:
      json in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
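The JSON Lines default versus the multiLine option can be sketched as follows (paths are illustrative):

```scala
// Default: JSON Lines, one JSON object per line.
val events = spark.read.json("/logs/events.jsonl")      // illustrative path

// One (possibly pretty-printed) JSON document spanning the whole file:
val doc = spark.read
  .option("multiLine", "true")
  .json("/logs/single-doc.json")                        // illustrative path
```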
    • json

      public Dataset<Row> json(String path)
      Description copied from class: DataFrameReader
      Loads a JSON file and returns the results as a DataFrame.

      See the documentation on the overloaded json() method with varargs for more details.

      Overrides:
      json in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • json

      public Dataset<Row> json(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
      Loads JSON files and returns the results as a DataFrame.

      JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.

      This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

      You can find the JSON-specific options for reading JSON files in Data Source Option in the version you use.

      Overrides:
      json in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • json

      public Dataset<Row> json(JavaRDD<String> jsonRDD)
      Deprecated.
      Use json(Dataset[String]) instead. Since 2.2.0.
      Loads a JavaRDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.

      Unless the schema is specified using schema function, this function goes through the input once to determine the input schema.

      Parameters:
      jsonRDD - input RDD with one JSON object per record
      Returns:
      (undocumented)
      Since:
      1.4.0
    • json

      public Dataset<Row> json(RDD<String> jsonRDD)
      Deprecated.
      Use json(Dataset[String]) instead. Since 2.2.0.
      Loads an RDD[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.

      Unless the schema is specified using schema function, this function goes through the input once to determine the input schema.

      Parameters:
      jsonRDD - input RDD with one JSON object per record
      Returns:
      (undocumented)
      Since:
      1.4.0
    • json

      public Dataset<Row> json(Dataset<String> jsonDataset)
Loads a Dataset[String] storing JSON objects (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
    • load

      public Dataset<Row> load(String... paths)
      Description copied from class: DataFrameReader
Loads the input as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.

      Overrides:
      load in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load()
      Description copied from class: DataFrameReader
Loads the input as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).

      Specified by:
      load in class DataFrameReader<Dataset>
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(String path)
      Description copied from class: DataFrameReader
Loads the input as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).

      Specified by:
      load in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • load

      public Dataset<Row> load(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
Loads the input as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.

      Specified by:
      load in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • option

      public DataFrameReader option(String key, String value)
      Description copied from class: DataFrameReader
      Adds an input option for the underlying data source.

      All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

      Overrides:
      option in class DataFrameReader<Dataset>
      Parameters:
      key - (undocumented)
      value - (undocumented)
      Returns:
      (undocumented)
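The case-insensitive override behaviour described above can be sketched as follows (option keys and values are illustrative):

```scala
val reader = spark.read
  .option("header", "true")
  .option("inferSchema", true)  // boolean overload
  .option("Header", "false")    // same key case-insensitively: replaces "true"

// The reader now carries header=false and inferSchema=true.
```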
    • option

      public DataFrameReader option(String key, boolean value)
      Description copied from class: DataFrameReader
      Adds an input option for the underlying data source.

      All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

      Overrides:
      option in class DataFrameReader<Dataset>
      Parameters:
      key - (undocumented)
      value - (undocumented)
      Returns:
      (undocumented)
    • option

      public DataFrameReader option(String key, long value)
      Description copied from class: DataFrameReader
      Adds an input option for the underlying data source.

      All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

      Overrides:
      option in class DataFrameReader<Dataset>
      Parameters:
      key - (undocumented)
      value - (undocumented)
      Returns:
      (undocumented)
    • option

      public DataFrameReader option(String key, double value)
      Description copied from class: DataFrameReader
      Adds an input option for the underlying data source.

      All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

      Overrides:
      option in class DataFrameReader<Dataset>
      Parameters:
      key - (undocumented)
      value - (undocumented)
      Returns:
      (undocumented)
    • options

      public DataFrameReader options(scala.collection.Map<String,String> options)
      Description copied from class: DataFrameReader
      (Scala-specific) Adds input options for the underlying data source.

      All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

      Overrides:
      options in class DataFrameReader<Dataset>
      Parameters:
      options - (undocumented)
      Returns:
      (undocumented)
    • options

      public DataFrameReader options(Map<String,String> options)
      Description copied from class: DataFrameReader
      Adds input options for the underlying data source.

      All options are maintained in a case-insensitive way in terms of key names. If a new option has the same key case-insensitively, it will override the existing option.

      Overrides:
      options in class DataFrameReader<Dataset>
      Parameters:
      options - (undocumented)
      Returns:
      (undocumented)
    • orc

      public Dataset<Row> orc(String... paths)
      Description copied from class: DataFrameReader
      Loads ORC files and returns the result as a DataFrame.

      ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.

      Overrides:
      orc in class DataFrameReader<Dataset>
      Parameters:
      paths - input paths
      Returns:
      (undocumented)
    • orc

      public Dataset<Row> orc(String path)
      Description copied from class: DataFrameReader
      Loads an ORC file and returns the result as a DataFrame.

      Overrides:
      orc in class DataFrameReader<Dataset>
      Parameters:
      path - input path
      Returns:
      (undocumented)
    • orc

      public Dataset<Row> orc(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
      Loads ORC files and returns the result as a DataFrame.

      ORC-specific option(s) for reading ORC files can be found in Data Source Option in the version you use.

      Overrides:
      orc in class DataFrameReader<Dataset>
      Parameters:
      paths - input paths
      Returns:
      (undocumented)
    • parquet

      public Dataset<Row> parquet(String... paths)
      Description copied from class: DataFrameReader
      Loads a Parquet file, returning the result as a DataFrame.

      Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.

      Overrides:
      parquet in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
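Since Parquet files carry their own schema, no inference pass or explicit schema is needed. A sketch with illustrative paths:

```scala
// Schema is read from the Parquet footers; no inference pass required.
val df = spark.read.parquet("/warehouse/events/")  // illustrative path

// Multiple roots can be loaded together via the varargs overload:
val merged = spark.read.parquet("/warehouse/2023/", "/warehouse/2024/")
```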
    • parquet

      public Dataset<Row> parquet(String path)
      Description copied from class: DataFrameReader
      Loads a Parquet file, returning the result as a DataFrame. See the documentation on the other overloaded parquet() method for more details.

      Overrides:
      parquet in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • parquet

      public Dataset<Row> parquet(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
      Loads a Parquet file, returning the result as a DataFrame.

      Parquet-specific option(s) for reading Parquet files can be found in Data Source Option in the version you use.

      Overrides:
      parquet in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • schema

      public DataFrameReader schema(StructType schema)
      Description copied from class: DataFrameReader
      Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.

      Overrides:
      schema in class DataFrameReader<Dataset>
      Parameters:
      schema - (undocumented)
      Returns:
      (undocumented)
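A sketch of supplying a StructType so the source can skip inference (field names and the path are illustrative):

```scala
import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("a", IntegerType, nullable = true),
  StructField("b", StringType, nullable = true)))

// The JSON source skips its schema-inference pass because the schema is given.
val df = spark.read
  .schema(schema)
  .json("/tmp/records.jsonl")  // illustrative path
```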
    • schema

      public DataFrameReader schema(String schemaString)
      Description copied from class: DataFrameReader
      Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.

      
         spark.read.schema("a INT, b STRING, c DOUBLE").csv("test.csv")
       

      Overrides:
      schema in class DataFrameReader<Dataset>
      Parameters:
      schemaString - (undocumented)
      Returns:
      (undocumented)
    • table

      public Dataset<Row> table(String tableName)
      Description copied from class: DataFrameReader
      Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.

      Specified by:
      table in class DataFrameReader<Dataset>
      Parameters:
      tableName - is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then match the table/view from the current database. Note that, the global temporary view database is also valid here.
      Returns:
      (undocumented)
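The name-resolution rules described above can be sketched as follows (table and view names are made up):

```scala
// Unqualified: temporary views are checked first, then the current database.
val t1 = spark.read.table("orders")

// Qualified: resolves directly in the named database.
val t2 = spark.read.table("sales.orders")

// Global temporary views live in their reserved database.
val t3 = spark.read.table("global_temp.cached_orders")
```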
    • text

      public Dataset<Row> text(String... paths)
      Description copied from class: DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.

      By default, each line in the text files is a new row in the resulting DataFrame. For example:

      
         // Scala:
         spark.read.text("/path/to/spark/README.md")
      
         // Java:
         spark.read().text("/path/to/spark/README.md")
       

      You can find the text-specific options for reading text files in Data Source Option in the version you use.

      Overrides:
      text in class DataFrameReader<Dataset>
      Parameters:
      paths - input paths
      Returns:
      (undocumented)
    • text

      public Dataset<Row> text(String path)
      Description copied from class: DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. See the documentation on the other overloaded text() method for more details.

      Overrides:
      text in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • text

      public Dataset<Row> text(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.

      By default, each line in the text files is a new row in the resulting DataFrame. For example:

      
         // Scala:
         spark.read.text("/path/to/spark/README.md")
      
         // Java:
         spark.read().text("/path/to/spark/README.md")
       

      You can find the text-specific options for reading text files in Data Source Option in the version you use.

      Overrides:
      text in class DataFrameReader<Dataset>
      Parameters:
      paths - input paths
      Returns:
      (undocumented)
    • textFile

      public Dataset<String> textFile(String... paths)
      Description copied from class: DataFrameReader
      Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.

      If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.

By default, each line in the text files is a new element in the resulting Dataset. For example:

      
         // Scala:
         spark.read.textFile("/path/to/spark/README.md")
      
         // Java:
         spark.read().textFile("/path/to/spark/README.md")
       

      You can set the text-specific options as specified in DataFrameReader.text.

      Overrides:
      textFile in class DataFrameReader<Dataset>
      Parameters:
paths - input paths
      Returns:
      (undocumented)
    • textFile

      public Dataset<String> textFile(String path)
      Description copied from class: DataFrameReader
      Loads text files and returns a Dataset of String. See the documentation on the other overloaded textFile() method for more details.
      Overrides:
      textFile in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • textFile

      public Dataset<String> textFile(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
      Loads text files and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.

      If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.

By default, each line in the text files is a new element in the resulting Dataset. For example:

      
         // Scala:
         spark.read.textFile("/path/to/spark/README.md")
      
         // Java:
         spark.read().textFile("/path/to/spark/README.md")
       

      You can set the text-specific options as specified in DataFrameReader.text.

      Overrides:
      textFile in class DataFrameReader<Dataset>
      Parameters:
paths - input paths
      Returns:
      (undocumented)
    • xml

      public Dataset<Row> xml(String... paths)
      Description copied from class: DataFrameReader
      Loads XML files and returns the result as a DataFrame.

This function makes one pass over the input to determine the schema when the inferSchema option is enabled. To avoid that extra pass, disable the inferSchema option or specify the schema explicitly using schema.

      You can find the XML-specific options for reading XML files in Data Source Option in the version you use.

      Overrides:
      xml in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • xml

      public Dataset<Row> xml(String path)
      Description copied from class: DataFrameReader
Loads an XML file and returns the result as a DataFrame. See the documentation on the other overloaded xml() method for more details.

      Overrides:
      xml in class DataFrameReader<Dataset>
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • xml

      public Dataset<Row> xml(scala.collection.immutable.Seq<String> paths)
      Description copied from class: DataFrameReader
      Loads XML files and returns the result as a DataFrame.

This function makes one pass over the input to determine the schema when the inferSchema option is enabled. To avoid that extra pass, disable the inferSchema option or specify the schema explicitly using schema.

      You can find the XML-specific options for reading XML files in Data Source Option in the version you use.

      Overrides:
      xml in class DataFrameReader<Dataset>
      Parameters:
      paths - (undocumented)
      Returns:
      (undocumented)
    • xml

      public Dataset<Row> xml(Dataset<String> xmlDataset)
Loads a Dataset[String] storing XML objects and returns the result as a DataFrame.