org.apache.spark.sql

DataFrameReader

class DataFrameReader extends Logging

:: Experimental :: Interface used to load a DataFrame from external storage systems (e.g. file systems, key-value stores, etc.). Use SQLContext.read to access this.

Annotations
@Experimental()
Source
DataFrameReader.scala
Since

1.4.0

Linear Supertypes
Logging, AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. def format(source: String): DataFrameReader

    Specifies the input data source format.

    Since

    1.4.0
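
    A minimal usage sketch, selecting the built-in JSON source (the path is a placeholder):

    // Scala:
    val df = sqlContext.read.format("json").load("path/to/file.json")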

  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  14. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  15. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  16. def jdbc(url: String, table: String, predicates: Array[String], connectionProperties: Properties): DataFrame

    Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.

    Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

    url

    JDBC database url of the form jdbc:subprotocol:subname

    table

    Name of the table in the external database.

    predicates

    Condition in the WHERE clause for each partition.

    connectionProperties

    JDBC database connection arguments, a list of arbitrary string tag/value. Normally at least a "user" and "password" property should be included.

    Since

    1.4.0
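
    A minimal sketch; the URL, table name, predicates and credentials are placeholders. Each predicate defines one partition, so this reads the table as two partitions:

    // Scala:
    val props = new java.util.Properties()
    props.setProperty("user", "username")
    props.setProperty("password", "password")
    val predicates = Array("country = 'US'", "country = 'GB'")
    val df = sqlContext.read.jdbc("jdbc:postgresql://host:5432/db", "people", predicates, props)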

  17. def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int, connectionProperties: Properties): DataFrame

    Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.

    Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

    url

    JDBC database url of the form jdbc:subprotocol:subname

    table

    Name of the table in the external database.

    columnName

    the name of a column of integral type that will be used for partitioning.

    lowerBound

    the minimum value of columnName used to decide partition stride

    upperBound

    the maximum value of columnName used to decide partition stride

    numPartitions

    the number of partitions. The range [lowerBound, upperBound] will be split evenly into this many partitions

    connectionProperties

    JDBC database connection arguments, a list of arbitrary string tag/value. Normally at least a "user" and "password" property should be included.

    Since

    1.4.0
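
    A minimal sketch; the URL, table, column and credentials are placeholders. This reads the table in 10 partitions, splitting the id range 0 to 10000 evenly across them:

    // Scala:
    val props = new java.util.Properties()
    props.setProperty("user", "username")
    props.setProperty("password", "password")
    val df = sqlContext.read.jdbc("jdbc:postgresql://host:5432/db", "people", "id", 0L, 10000L, 10, props)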

  18. def jdbc(url: String, table: String, properties: Properties): DataFrame

    Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.

    Since

    1.4.0
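
    A minimal sketch; the URL, table name and credentials are placeholders:

    // Scala:
    val props = new java.util.Properties()
    props.setProperty("user", "username")
    props.setProperty("password", "password")
    val df = sqlContext.read.jdbc("jdbc:mysql://host:3306/db", "people", props)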

  19. def json(jsonRDD: RDD[String]): DataFrame

    Loads an RDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.

    Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

    jsonRDD

    input RDD with one JSON object per record

    Since

    1.4.0
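
    A minimal sketch, assuming sc is an existing SparkContext and sqlContext an existing SQLContext:

    // Scala:
    val jsonRDD = sc.parallelize(Seq("""{"name":"Alice","age":30}""", """{"name":"Bob","age":25}"""))
    val df = sqlContext.read.json(jsonRDD)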

  20. def json(jsonRDD: JavaRDD[String]): DataFrame

    Loads a JavaRDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.

    Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

    jsonRDD

    input RDD with one JSON object per record

    Since

    1.4.0

  21. def json(paths: String*): DataFrame

    Loads a JSON file (one object per line) and returns the result as a DataFrame.

    This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

    You can set the following JSON-specific options to deal with non-standard JSON files:

    • primitivesAsString (default false): infers all primitive values as a string type
    • allowComments (default false): ignores Java/C++ style comments in JSON records
    • allowUnquotedFieldNames (default false): allows unquoted JSON field names
    • allowSingleQuotes (default true): allows single quotes in addition to double quotes
    • allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
    Since

    1.6.0
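
    A minimal sketch reading multiple paths and enabling one of the options above (the paths are placeholders):

    // Scala:
    val df = sqlContext.read
      .option("allowComments", "true")
      .json("path/to/2015.json", "path/to/2016.json")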

  22. def json(path: String): DataFrame

    Loads a JSON file (one object per line) and returns the result as a DataFrame.

    This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

    You can set the following JSON-specific options to deal with non-standard JSON files:

    • primitivesAsString (default false): infers all primitive values as a string type
    • allowComments (default false): ignores Java/C++ style comments in JSON records
    • allowUnquotedFieldNames (default false): allows unquoted JSON field names
    • allowSingleQuotes (default true): allows single quotes in addition to double quotes
    • allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
    Since

    1.4.0

  23. def load(paths: String*): DataFrame

    Loads input as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.

    Annotations
    @varargs()
    Since

    1.6.0

  24. def load(): DataFrame

    Loads input as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).

    Since

    1.4.0

  25. def load(path: String): DataFrame

    Loads input as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).

    Since

    1.4.0
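
    A minimal sketch, loading a Parquet file through the generic load path (the path is a placeholder):

    // Scala:
    val df = sqlContext.read.format("parquet").load("path/to/data.parquet")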

  26. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  27. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  28. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  31. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  32. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  33. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  34. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  35. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  39. final def notify(): Unit

    Definition Classes
    AnyRef
  40. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  41. def option(key: String, value: String): DataFrameReader

    Adds an input option for the underlying data source.

    Since

    1.4.0
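
    A minimal sketch, setting a JSON-specific option before loading (the path is a placeholder):

    // Scala:
    val df = sqlContext.read
      .format("json")
      .option("primitivesAsString", "true")
      .load("path/to/file.json")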

  42. def options(options: java.util.Map[String, String]): DataFrameReader

    Adds input options for the underlying data source.

    Since

    1.4.0

  43. def options(options: Map[String, String]): DataFrameReader

    (Scala-specific) Adds input options for the underlying data source.

    Since

    1.4.0
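
    A minimal sketch passing several options at once via a Scala Map (the path is a placeholder):

    // Scala:
    val df = sqlContext.read
      .format("json")
      .options(Map("primitivesAsString" -> "true", "allowComments" -> "true"))
      .load("path/to/file.json")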

  44. def orc(path: String): DataFrame

    Loads an ORC file and returns the result as a DataFrame.

    path

    input path

    Since

    1.5.0

    Note

    Currently, this method can only be used together with HiveContext.
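
    A minimal sketch, assuming hiveContext is an existing HiveContext (the path is a placeholder):

    // Scala:
    val df = hiveContext.read.orc("path/to/file.orc")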

  45. def parquet(paths: String*): DataFrame

    Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.

    Annotations
    @varargs()
    Since

    1.4.0
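
    A minimal sketch reading two Parquet paths (both placeholders):

    // Scala:
    val df = sqlContext.read.parquet("path/to/2015.parquet", "path/to/2016.parquet")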

  46. def schema(schema: StructType): DataFrameReader

    Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.

    Since

    1.4.0
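
    A minimal sketch, declaring the schema up front so the extra JSON scan for inference is skipped (the path and field names are placeholders):

    // Scala:
    import org.apache.spark.sql.types._
    val schema = StructType(Seq(
      StructField("name", StringType, nullable = true),
      StructField("age", IntegerType, nullable = true)))
    val df = sqlContext.read.schema(schema).json("path/to/file.json")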

  47. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  48. def table(tableName: String): DataFrame

    Returns the specified table as a DataFrame.

    Since

    1.4.0
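
    A minimal sketch, assuming a table named "people" is registered in the catalog:

    // Scala:
    val df = sqlContext.read.table("people")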

  49. def text(paths: String*): DataFrame

    Loads a text file and returns a DataFrame with a single string column named "value".

    Loads a text file and returns a DataFrame with a single string column named "value". Each line in the text file is a new row in the resulting DataFrame. For example:

    // Scala:
    sqlContext.read.text("/path/to/spark/README.md")
    
    // Java:
    sqlContext.read().text("/path/to/spark/README.md")
    paths

    input path(s)

    Annotations
    @varargs()
    Since

    1.6.0

  50. def toString(): String

    Definition Classes
    AnyRef → Any
  51. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  52. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  53. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
