class SparkSession extends sql.api.SparkSession[Dataset] with Logging

The entry point to programming Spark with the Dataset and DataFrame API.

In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get an existing session:

SparkSession.builder().getOrCreate()

The builder can also be used to create a new session:

SparkSession.builder
  .master("local")
  .appName("Word Count")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
Self Type
SparkSession
Annotations
@Stable()
Source
SparkSession.scala
Linear Supertypes
Logging, SparkSession[Dataset], Closeable, AutoCloseable, Serializable, AnyRef, Any

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    Logging

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addArtifact(source: String, target: String): Unit

    Add a single artifact to the session while preserving the directory structure specified by target under the session's working directory for that particular file extension.

    Supported target file extensions are .jar and .class.

    Example

    addArtifact("/Users/dummyUser/files/foo/bar.class", "foo/bar.class")
    addArtifact("/Users/dummyUser/files/flat.class", "flat.class")
    // Directory structure of the session's working directory for class files would look like:
    // ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
    // ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class
    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
  5. def addArtifact(bytes: Array[Byte], target: String): Unit

    Add a single in-memory artifact to the session while preserving the directory structure specified by target under the session's working directory for that particular file extension.

    Supported target file extensions are .jar and .class.

    Example

    addArtifact(bytesBar, "foo/bar.class")
    addArtifact(bytesFlat, "flat.class")
    // Directory structure of the session's working directory for class files would look like:
    // ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
    // ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class
    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
  6. def addArtifact(uri: URI): Unit

    Add a single artifact to the current session.

    Currently, it supports local files with the extensions .jar and .class, as well as Apache Ivy URIs.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
  7. def addArtifact(path: String): Unit

    Add a single artifact to the current session.

    Currently, only local files with the extensions .jar and .class are supported.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
  8. def addArtifacts(uri: URI*): Unit

    Add one or more artifacts to the session.

    Currently, it supports local files with the extensions .jar and .class, as well as Apache Ivy URIs.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental() @varargs()
  9. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  10. def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame

    Convert a BaseRelation created for external data sources into a DataFrame.

    Since

    2.0.0

  11. lazy val catalog: Catalog

    Interface through which the user may create, drop, alter or query underlying databases, tables, functions etc.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @transient()
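
    A minimal usage sketch for catalog (assuming an existing SparkSession named spark; the table name "people" is only illustrative):

    // Enumerate databases and tables visible to this session.
    spark.catalog.listDatabases().show()
    spark.catalog.listTables().show()
    // Check whether a (hypothetical) table or view named "people" exists.
    val peopleExists = spark.catalog.tableExists("people")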
  12. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  13. def close(): Unit

    Stop the underlying SparkContext.

    Definition Classes
    SparkSession → Closeable → AutoCloseable
    Since

    2.1.0

  14. lazy val conf: RuntimeConfig

    Runtime configuration interface for Spark.

    This is the interface through which the user can get and set all Spark and Hadoop configurations that are relevant to Spark SQL. When getting the value of a config, this defaults to the value set in the underlying SparkContext, if any.

    Annotations
    @transient()
    Since

    2.0.0
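
    A minimal sketch of reading and writing runtime configuration (assuming an existing SparkSession named spark; the option shown is a standard Spark SQL setting):

    // Read a configuration value, falling back to a default if it is unset.
    val shufflePartitions = spark.conf.get("spark.sql.shuffle.partitions", "200")
    // Update a runtime-modifiable SQL configuration for this session only.
    spark.conf.set("spark.sql.shuffle.partitions", "64")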

  15. def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame

    Applies a schema to a List of Java Beans.

    WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

    Definition Classes
    SparkSession → SparkSession
  16. def createDataFrame(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Applies a schema to an RDD of Java Beans.

    WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

    Since

    2.0.0

  17. def createDataFrame(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Applies a schema to an RDD of Java Beans.

    WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

    Since

    2.0.0

  18. def createDataFrame(rows: List[Row], schema: StructType): DataFrame

    :: DeveloperApi :: Creates a DataFrame from a java.util.List containing org.apache.spark.sql.Rows using the given schema. It is important to make sure that the structure of every org.apache.spark.sql.Row of the provided List matches the provided schema. Otherwise, there will be a runtime exception.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @DeveloperApi()
  19. def createDataFrame(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    :: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception.

    Annotations
    @DeveloperApi()
    Since

    2.0.0

  20. def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame

    :: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception. Example:

    import org.apache.spark.sql._
    import org.apache.spark.sql.types._
    // Obtain (or create) a session and grab its SparkContext for the RDD operations below.
    val sparkSession = SparkSession.builder().getOrCreate()
    val sc = sparkSession.sparkContext
    
    val schema =
      StructType(
        StructField("name", StringType, false) ::
        StructField("age", IntegerType, true) :: Nil)
    
    val people =
      sc.textFile("examples/src/main/resources/people.txt").map(
        _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
    val dataFrame = sparkSession.createDataFrame(people, schema)
    dataFrame.printSchema
    // root
    // |-- name: string (nullable = false)
    // |-- age: integer (nullable = true)
    
    dataFrame.createOrReplaceTempView("people")
    sparkSession.sql("select name from people").collect.foreach(println)
    Annotations
    @DeveloperApi()
    Since

    2.0.0

  21. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Creates a DataFrame from a local Seq of Product.

    Definition Classes
    SparkSession → SparkSession
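
    A minimal sketch (assuming an existing SparkSession named spark; the Person case class is only illustrative):

    case class Person(name: String, age: Long)
    // Each Product field becomes a column; field names become column names.
    val df = spark.createDataFrame(Seq(Person("Alice", 29), Person("Bob", 31)))
    df.show()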
  22. def createDataFrame[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).

    Since

    2.0.0

  23. def createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Java Example

    List<String> data = Arrays.asList("hello", "world");
    Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
    Definition Classes
    SparkSession → SparkSession
  24. def createDataset[T](data: RDD[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Since

    2.0.0

  25. def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Example

    import spark.implicits._
    case class Person(name: String, age: Long)
    val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
    val ds = spark.createDataset(data)
    
    ds.show()
    // +-------+---+
    // |   name|age|
    // +-------+---+
    // |Michael| 29|
    // |   Andy| 30|
    // | Justin| 19|
    // +-------+---+
    Definition Classes
    SparkSession → SparkSession
  26. def dataSource: DataSourceRegistration

    A collection of methods for registering user-defined data sources.

    Annotations
    @Experimental() @Unstable()
    Since

    4.0.0

  27. lazy val emptyDataFrame: DataFrame

    Returns a DataFrame with no rows or columns.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @transient()
  28. def emptyDataset[T](implicit arg0: Encoder[T]): Dataset[T]

    Creates a new Dataset of type T containing zero elements.

    Definition Classes
    SparkSession → SparkSession
  29. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  30. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  31. def executeCommand(runner: String, command: String, options: Map[String, String]): DataFrame

    Execute an arbitrary string command inside an external execution engine rather than Spark. This can be useful when the user wants to execute commands outside of Spark, for example executing a custom DDL/DML command for JDBC, creating an index for Elasticsearch, or creating cores for Solr.

    The command is eagerly executed after this method is called, and the returned DataFrame will contain the output of the command (if any).

    runner

    The class name of the runner that implements ExternalCommandRunner.

    command

    The target command to be executed

    options

    The options for the runner.

    Annotations
    @Unstable()
    Since

    3.0.0

  32. def experimental: ExperimentalMethods

    :: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.

    Annotations
    @Experimental() @Unstable()
    Since

    2.0.0

  33. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  34. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  35. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  36. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  38. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  39. def listenerManager: ExecutionListenerManager

    An interface to register custom org.apache.spark.sql.util.QueryExecutionListeners that listen for execution metrics.

    Since

    2.0.0
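
    A minimal sketch of registering a listener (assuming an existing SparkSession named spark):

    import org.apache.spark.sql.execution.QueryExecution
    import org.apache.spark.sql.util.QueryExecutionListener

    spark.listenerManager.register(new QueryExecutionListener {
      // Invoked when a query completes successfully; durationNs is the execution time in nanoseconds.
      override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
        println(s"$funcName succeeded in ${durationNs / 1e6} ms")
      // Invoked when a query fails.
      override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
        println(s"$funcName failed: ${exception.getMessage}")
    })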

  40. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  41. def logDebug(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logDebug(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logError(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  48. def logError(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  49. def logInfo(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  50. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  51. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  52. def logInfo(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  53. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  54. def logTrace(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  55. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  56. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  57. def logTrace(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  58. def logWarning(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  59. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  60. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  61. def logWarning(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  62. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  63. def newSession(): SparkSession

    Start a new session with isolated SQL configurations and temporary tables; registered functions are also isolated, but the underlying SparkContext and cached data are shared.

    Definition Classes
    SparkSession → SparkSession
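
    A minimal sketch illustrating the isolation (assuming an existing SparkSession named spark):

    val other = spark.newSession()
    // SQL configurations set here are not visible in the other session...
    spark.conf.set("spark.sql.shuffle.partitions", "8")
    // ...but both sessions share the same SparkContext.
    assert(other.sparkContext eq spark.sparkContext)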
  64. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  65. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  66. def parseDataType(dataTypeString: String): DataType

    Parses the data type in our internal string representation. The data type string should have the same format as the one generated by toString in Scala. It is only used by PySpark.

    Attributes
    protected[sql]
  67. def range(start: Long, end: Long, step: Long, numPartitions: Int): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, using the specified number of partitions.

    Definition Classes
    SparkSession → SparkSession
  68. def range(start: Long, end: Long, step: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

    Definition Classes
    SparkSession → SparkSession
  69. def range(start: Long, end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

    Definition Classes
    SparkSession → SparkSession
  70. def range(end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.

    Definition Classes
    SparkSession → SparkSession
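
    A minimal sketch of the range variants (assuming an existing SparkSession named spark):

    // id column: 0, 2, 4, 6, 8 - explicit start, end, step and number of partitions.
    spark.range(0, 10, 2, 2).show()
    // id column: 0, 1, 2, 3, 4 - only the exclusive end; step defaults to 1.
    spark.range(5).show()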
  71. def read: DataFrameReader

    Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

    sparkSession.read.parquet("/path/to/file.parquet")
    sparkSession.read.schema(schema).json("/path/to/file.json")
    Definition Classes
    SparkSession → SparkSession
  72. def readStream: DataStreamReader

    Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.

    sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
    sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
    Since

    2.0.0

  73. lazy val sessionState: SessionState

    State isolated across sessions, including SQL configurations, temporary tables, registered functions, and everything else that accepts an org.apache.spark.sql.internal.SQLConf. If parentSessionState is not null, the SessionState will be a copy of the parent.

    This is internal to Spark and there is no guarantee on interface stability.

    Annotations
    @Unstable() @transient()
    Since

    2.2.0

  74. lazy val sharedState: SharedState

    State shared across sessions, including the SparkContext, cached data, listener, and a catalog that interacts with external systems.

    This is internal to Spark and there is no guarantee on interface stability.

    Annotations
    @Unstable() @transient()
    Since

    2.2.0

  75. val sparkContext: SparkContext
  76. def sql(sqlText: String): DataFrame

    Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    Definition Classes
    SparkSession → SparkSession
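
    A minimal sketch (assuming an existing SparkSession named spark; the temporary view name is only illustrative):

    // Register a small temporary view, then query it with SQL.
    spark.range(5).createOrReplaceTempView("nums")
    spark.sql("SELECT id * 2 AS doubled FROM nums").show()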
  77. def sql(sqlText: String, args: Map[String, Any]): DataFrame

    Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    sqlText

    A SQL statement with named parameters to execute.

    args

    A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(); in that case it is taken as is.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
  78. def sql(sqlText: String, args: Map[String, Any]): DataFrame

    Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    sqlText

    A SQL statement with named parameters to execute.

    args

    A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(); in that case it is taken as is.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
  79. def sql(sqlText: String, args: Array[_]): DataFrame

    Executes a SQL query substituting positional parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    sqlText

    A SQL statement with positional parameters to execute.

    args

    An array of Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types (https://spark.apache.org/docs/latest/sql-ref-datatypes.html) for supported value types in Scala/Java. For example, 1, "Steven", LocalDate.of(2023, 4, 2). A value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(); in that case it is taken as is.

    Definition Classes
    SparkSession → SparkSession
    Annotations
    @Experimental()
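
    A minimal sketch of both parameterized forms (assuming an existing SparkSession named spark and an illustrative temporary view named "people" with name and age columns):

    // Named parameters: ":minAge" is substituted from the map.
    spark.sql("SELECT * FROM people WHERE age > :minAge", Map("minAge" -> 21)).show()
    // Positional parameters: each "?" is substituted from the array, in order.
    spark.sql("SELECT * FROM people WHERE age > ? AND name = ?", Array(21, "Steven")).show()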
  80. val sqlContext: SQLContext

    A wrapped version of this session in the form of a SQLContext, for backward compatibility.

    Since

    2.0.0

  81. def stop(): Unit

    Synonym for close().

    Definition Classes
    SparkSession
    Since

    2.0.0

  82. def streams: StreamingQueryManager

    Returns a StreamingQueryManager that allows managing all the StreamingQuerys active on this session.

    Annotations
    @Unstable()
    Since

    2.0.0
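
    A minimal sketch (assuming an existing SparkSession named spark with one or more streaming queries already started):

    // Print the name and id of every active streaming query in this session.
    spark.streams.active.foreach(q => println(s"${q.name}: ${q.id}"))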

  83. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  84. def table(tableName: String): DataFrame

    Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.

    tableName

    is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from that database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.

    Definition Classes
    SparkSession → SparkSession
  85. def time[T](f: => T): T

    Executes some code block and prints to stdout the time taken to execute the block. This is available in Scala only and is used primarily for interactive testing and debugging.

    Definition Classes
    SparkSession
    Since

    2.1.0
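
    A minimal sketch (assuming an existing SparkSession named spark):

    // Prints the wall-clock time taken by the block, then returns its result (here, a row count).
    val rows = spark.time(spark.range(1000000).count())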

  86. def toString(): String
    Definition Classes
    AnyRef → Any
  87. def udf: UDFRegistration

    A collection of methods for registering user-defined functions (UDF).

    The following example registers a Scala closure as UDF:

    sparkSession.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)

    The following example registers a UDF in Java:

    sparkSession.udf().register("myUDF",
        (Integer arg1, String arg2) -> arg2 + arg1,
        DataTypes.StringType);
    Definition Classes
    SparkSession → SparkSession
  88. def version: String

    The version of Spark on which this application is running.

    Definition Classes
    SparkSession → SparkSession
  89. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  90. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  91. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  92. def withLogContext(context: HashMap[String, String])(body: => Unit): Unit
    Attributes
    protected
    Definition Classes
    Logging
  93. object implicits extends SQLImplicits with Serializable

    (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.

    val sparkSession = SparkSession.builder.getOrCreate()
    import sparkSession.implicits._
    Since

    2.0.0

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
