Class SparkSession
- All Implemented Interfaces:
Closeable, Serializable, AutoCloseable, org.apache.spark.internal.Logging
In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get an existing session:
SparkSession.builder().getOrCreate()
The builder can also be used to create a new session:
SparkSession.builder
.master("local")
.appName("Word Count")
.config("spark.some.config.option", "some-value")
.getOrCreate()
param: sparkContext The Spark context associated with this Spark session.
param: existingSharedState If supplied, use the existing shared state instead of creating a new one.
param: parentSessionState If supplied, inherit all session state (i.e. temporary views, SQL config, UDFs etc.) from the parent.
-
Nested Class Summary
static class SparkSession.Builder
    Builder for SparkSession.
class SparkSession.implicits$
    (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Method Summary
static SparkSession active()
    Returns the currently active SparkSession, otherwise the default one.
void addArtifact(byte[] bytes, String target)
    Add a single in-memory artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.
void addArtifact(String path)
    Add a single artifact to the current session.
void addArtifact(String source, String target)
    Add a single artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.
void addArtifact(URI uri)
    Add a single artifact to the current session.
void addArtifacts(URI... uri)
    Add one or more artifacts to the session.
void addArtifacts(scala.collection.immutable.Seq<URI> uri)
    Add one or more artifacts to the session.
Dataset<Row> baseRelationToDataFrame(BaseRelation baseRelation)
    Convert a BaseRelation created for external data sources into a DataFrame.
static SparkSession.Builder builder()
    Creates a SparkSession.Builder for constructing a SparkSession.
Catalog catalog()
    Interface through which the user may create, drop, alter or query underlying databases, tables, functions etc.
static void clearActiveSession()
    Clears the active SparkSession for the current thread.
static void clearDefaultSession()
    Clears the default SparkSession that is returned by the builder.
void close()
    Stop the underlying SparkContext.
RuntimeConfig conf()
    Runtime configuration interface for Spark.
Dataset<Row> createDataFrame(List<?> data, Class<?> beanClass)
    Applies a schema to a List of Java Beans.
Dataset<Row> createDataFrame(List<Row> rows, StructType schema)
Dataset<Row> createDataFrame(JavaRDD<?> rdd, Class<?> beanClass)
    Applies a schema to an RDD of Java Beans.
Dataset<Row> createDataFrame(JavaRDD<Row> rowRDD, StructType schema)
Dataset<Row> createDataFrame(RDD<?> rdd, Class<?> beanClass)
    Applies a schema to an RDD of Java Beans.
<A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
    Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
Dataset<Row> createDataFrame(RDD<Row> rowRDD, StructType schema)
<A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$3)
    Creates a DataFrame from a local Seq of Product.
<T> Dataset<T> createDataset(List<T> data, Encoder<T> evidence$6)
    Creates a Dataset from a java.util.List of a given type.
<T> Dataset<T> createDataset(RDD<T> data, Encoder<T> evidence$5)
    Creates a Dataset from an RDD of a given type.
<T> Dataset<T> createDataset(scala.collection.immutable.Seq<T> data, Encoder<T> evidence$4)
    Creates a Dataset from a local Seq of data of a given type.
DataSourceRegistration dataSource()
    A collection of methods for registering user-defined data sources.
Dataset<Row> emptyDataFrame()
    Returns a DataFrame with no rows or columns.
<T> Dataset<T> emptyDataset(Encoder<T> evidence$1)
    Creates a new Dataset of type T containing zero elements.
Dataset<Row> executeCommand(String runner, String command, scala.collection.immutable.Map<String,String> options)
    Execute an arbitrary string command inside an external execution engine rather than Spark.
ExperimentalMethods experimental()
    :: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
static scala.Option<SparkSession> getActiveSession()
    Returns the active SparkSession for the current thread, returned by the builder.
static scala.Option<SparkSession> getDefaultSession()
    Returns the default SparkSession that is returned by the builder.
SparkSession.implicits$ implicits()
    Accessor for the nested Scala object.
ExecutionListenerManager listenerManager()
    An interface to register custom QueryExecutionListeners that listen for execution metrics.
static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)
SparkSession newSession()
    Start a new session with isolated SQL configurations and temporary tables; registered functions are also isolated, but the underlying SparkContext and cached data are shared.
static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
Dataset<Long> range(long end)
    Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
Dataset<Long> range(long start, long end)
    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
Dataset<Long> range(long start, long end, long step)
    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
Dataset<Long> range(long start, long end, long step, int numPartitions)
    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with partition number specified.
DataFrameReader read()
    Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
DataStreamReader readStream()
    Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
org.apache.spark.sql.internal.SessionState sessionState()
static void setActiveSession(SparkSession session)
    Changes the SparkSession that will be returned in this thread and its children when SparkSession.getOrCreate() is called.
static void setDefaultSession(SparkSession session)
    Sets the default SparkSession that is returned by the builder.
org.apache.spark.sql.internal.SharedState sharedState()
SparkContext sparkContext()
Dataset<Row> sql(String sqlText)
    Executes a SQL query using Spark, returning the result as a DataFrame.
Dataset<Row> sql(String sqlText, Object args)
    Executes a SQL query substituting positional parameters by the given arguments, returning the result as a DataFrame.
Dataset<Row> sql(String sqlText, scala.collection.immutable.Map<String,Object> args)
    Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame.
Dataset<Row> sql(String sqlText, java.util.Map<String,Object> args)
    Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame.
SQLContext sqlContext()
    A wrapped version of this session in the form of a SQLContext, for backward compatibility.
StreamingQueryManager streams()
    Returns a StreamingQueryManager that allows managing all the StreamingQuery instances active on this.
Dataset<Row> table(String tableName)
    Returns the specified table/view as a DataFrame.
UDFRegistration udf()
    A collection of methods for registering user-defined functions (UDF).
String version()
    The version of Spark on which this application is running.
Methods inherited from class org.apache.spark.sql.api.SparkSession
stop, time
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
-
Method Details
-
builder
Creates a SparkSession.Builder for constructing a SparkSession.
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
setActiveSession
Changes the SparkSession that will be returned in this thread and its children when SparkSession.getOrCreate() is called. This can be used to ensure that a given thread receives a SparkSession with an isolated session, instead of the global (first created) context.
- Parameters:
session - (undocumented)
- Since:
- 2.0.0
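For illustration, a minimal Scala sketch of thread-local session selection (assumes an existing session named spark; not from the official docs):

val isolated = spark.newSession()
val worker = new Thread(() => {
  SparkSession.setActiveSession(isolated) // this thread now resolves to `isolated`
  val resolved = SparkSession.builder().getOrCreate()
  assert(resolved eq isolated)
})
worker.start()
worker.join()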
-
clearActiveSession
public static void clearActiveSession()
Clears the active SparkSession for the current thread. Subsequent calls to getOrCreate will return the first created context instead of a thread-local override.
- Since:
- 2.0.0
-
setDefaultSession
Sets the default SparkSession that is returned by the builder.
- Parameters:
session - (undocumented)
- Since:
- 2.0.0
-
clearDefaultSession
public static void clearDefaultSession()Clears the default SparkSession that is returned by the builder.- Since:
- 2.0.0
-
getActiveSession
Returns the active SparkSession for the current thread, returned by the builder.
- Returns:
- (undocumented)
- Since:
- 2.2.0
- Note:
- Returns None when called on executors.
-
getDefaultSession
Returns the default SparkSession that is returned by the builder.
- Returns:
- (undocumented)
- Since:
- 2.2.0
- Note:
- Returns None when called on executors.
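A common pattern, shown here as a sketch: prefer an existing session and fall back to building one (the master and app name below are illustrative):

val spark = SparkSession.getActiveSession
  .orElse(SparkSession.getDefaultSession)
  .getOrElse(SparkSession.builder().master("local[*]").appName("Fallback").getOrCreate())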
-
active
Returns the currently active SparkSession, otherwise the default one. If there is no default SparkSession, throws an exception.
- Returns:
- (undocumented)
- Since:
- 2.4.0
-
org$apache$spark$internal$Logging$$log_
public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
-
org$apache$spark$internal$Logging$$log__$eq
public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
-
LogStringContext
public static org.apache.spark.internal.Logging.LogStringContext LogStringContext(scala.StringContext sc)
-
implicits
Accessor for the nested Scala object.
- Returns:
- (undocumented)
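A typical use, as a sketch: the import is path-dependent, so it must come from a stable val (here named spark):

val spark: SparkSession = SparkSession.builder().getOrCreate()
import spark.implicits._

val df = Seq(("Alice", 1), ("Bob", 2)).toDF("name", "id") // Seq -> DataFrame
val ds = Seq(1, 2, 3).toDS()                              // Seq -> Dataset[Int]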
-
addArtifacts
Description copied from class: SparkSession
Add one or more artifacts to the session. Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.
- Overrides:
addArtifacts in class SparkSession<Dataset>
- Parameters:
uri - (undocumented)
- Inheritdoc:
-
sparkContext
The Spark context associated with this Spark session.
-
version
Description copied from class: SparkSession
The version of Spark on which this application is running.
- Specified by:
version in class SparkSession<Dataset>
- Returns:
- (undocumented)
- Inheritdoc:
-
sessionState
public org.apache.spark.sql.internal.SessionState sessionState()
-
sqlContext
A wrapped version of this session in the form of a SQLContext, for backward compatibility.
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
conf
Runtime configuration interface for Spark. This is the interface through which the user can get and set all Spark and Hadoop configurations that are relevant to Spark SQL.
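A short sketch of getting and setting a runtime option (assumes a session named spark):

spark.conf.set("spark.sql.shuffle.partitions", "64")   // set a runtime option
val n = spark.conf.get("spark.sql.shuffle.partitions") // read it back: "64"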
-
listenerManager
An interface to register custom QueryExecutionListeners that listen for execution metrics.
- Returns:
- (undocumented)
- Since:
- 2.0.0
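A minimal sketch of registering a listener that logs query durations (assumes a session named spark):

import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

spark.listenerManager.register(new QueryExecutionListener {
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
    println(s"$funcName finished in ${durationNs / 1e6} ms")
  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
    println(s"$funcName failed: ${exception.getMessage}")
})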
-
experimental
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
udf
Description copied from class: SparkSession
A collection of methods for registering user-defined functions (UDF). The following example registers a Scala closure as a UDF:
sparkSession.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)
The following example registers a UDF in Java:
sparkSession.udf().register("myUDF", (Integer arg1, String arg2) -> arg2 + arg1, DataTypes.StringType);
- Specified by:
udf in class SparkSession<Dataset>
- Returns:
- (undocumented)
- Inheritdoc:
-
dataSource
A collection of methods for registering user-defined data sources.
- Returns:
- (undocumented)
- Since:
- 4.0.0
-
streams
Returns a StreamingQueryManager that allows managing all the StreamingQuery instances active on this.
- Returns:
- (undocumented)
- Since:
- 2.0.0
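For illustration, a sketch that inspects the active streaming queries (assumes a session named spark):

spark.streams.active.foreach { query =>
  println(s"name=${query.name} id=${query.id} active=${query.isActive}")
}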
-
newSession
Description copied from class: SparkSession
Start a new session with isolated SQL configurations and temporary tables; registered functions are also isolated, but the underlying SparkContext and cached data are shared.
- Specified by:
newSession in class SparkSession<Dataset>
- Returns:
- (undocumented)
- Inheritdoc:
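A sketch of the isolation guarantees (assumes a session named spark; the view name is illustrative):

val other = spark.newSession()
spark.range(3).createOrReplaceTempView("nums")
assert(spark.catalog.tableExists("nums"))        // visible in the creating session
assert(!other.catalog.tableExists("nums"))       // isolated from the new session
assert(spark.sparkContext eq other.sparkContext) // the context is shared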
-
emptyDataFrame
Description copied from class: SparkSession
Returns a DataFrame with no rows or columns.
- Specified by:
emptyDataFrame in class SparkSession<Dataset>
- Returns:
- (undocumented)
-
emptyDataset
Description copied from class: SparkSession
Creates a new Dataset of type T containing zero elements.
- Specified by:
emptyDataset in class SparkSession<Dataset>
- Parameters:
evidence$1 - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
createDataFrame
public <A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
- Parameters:
rdd - (undocumented)
evidence$2 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
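As a sketch, with a case class so the TypeTag is supplied implicitly (assumes a session named spark):

case class Person(name: String, age: Int)

val rdd = spark.sparkContext.parallelize(Seq(Person("Alice", 29), Person("Bob", 31)))
val df = spark.createDataFrame(rdd)
df.printSchema()
// root
//  |-- name: string (nullable = true)
//  |-- age: integer (nullable = false)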
-
createDataFrame
public <A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$3)
Description copied from class: SparkSession
Creates a DataFrame from a local Seq of Product.
- Specified by:
createDataFrame in class SparkSession<Dataset>
- Parameters:
data - (undocumented)
evidence$3 - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
createDataFrame
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception. Example:

import org.apache.spark.sql._
import org.apache.spark.sql.types._

val sparkSession = new org.apache.spark.sql.SparkSession(sc)

val schema =
  StructType(
    StructField("name", StringType, false) ::
    StructField("age", IntegerType, true) :: Nil)

val people =
  sc.textFile("examples/src/main/resources/people.txt").map(
    _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
val dataFrame = sparkSession.createDataFrame(people, schema)
dataFrame.printSchema
// root
// |-- name: string (nullable = false)
// |-- age: integer (nullable = true)

dataFrame.createOrReplaceTempView("people")
sparkSession.sql("select name from people").collect.foreach(println)

- Parameters:
rowRDD - (undocumented)
schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataFrame
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception.
- Parameters:
rowRDD - (undocumented)
schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataFrame
Description copied from class: SparkSession
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, there will be a runtime exception.
- Specified by:
createDataFrame in class SparkSession<Dataset>
- Parameters:
rows - (undocumented)
schema - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
createDataFrame
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Parameters:
rdd - (undocumented)
beanClass - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataFrame
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Parameters:
rdd - (undocumented)
beanClass - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataFrame
Description copied from class: SparkSession
Applies a schema to a List of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Specified by:
createDataFrame in class SparkSession<Dataset>
- Parameters:
data - (undocumented)
beanClass - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
baseRelationToDataFrame
Convert a BaseRelation created for external data sources into a DataFrame.
- Parameters:
baseRelation - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
createDataset
Description copied from class: SparkSession
Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

== Example ==

import spark.implicits._
case class Person(name: String, age: Long)
val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
val ds = spark.createDataset(data)

ds.show()
// +-------+---+
// |   name|age|
// +-------+---+
// |Michael| 29|
// |   Andy| 30|
// | Justin| 19|
// +-------+---+

- Specified by:
createDataset in class SparkSession<Dataset>
- Parameters:
data - (undocumented)
evidence$4 - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
createDataset
Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
- Parameters:
data - (undocumented)
evidence$5 - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
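As a sketch, using the implicit String encoder (assumes a session named spark):

import spark.implicits._ // provides the Encoder[String]

val rdd = spark.sparkContext.parallelize(Seq("hello", "world"))
val ds = spark.createDataset(rdd)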
-
createDataset
Description copied from class: SparkSession
Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

== Java Example ==

List<String> data = Arrays.asList("hello", "world");
Dataset<String> ds = spark.createDataset(data, Encoders.STRING());

- Specified by:
createDataset in class SparkSession<Dataset>
- Parameters:
data - (undocumented)
evidence$6 - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
range
Description copied from class: SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
- Specified by:
range in class SparkSession<Dataset>
- Parameters:
end - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
range
Description copied from class: SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
- Specified by:
range in class SparkSession<Dataset>
- Parameters:
start - (undocumented)
end - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
range
Description copied from class: SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
- Specified by:
range in class SparkSession<Dataset>
- Parameters:
start - (undocumented)
end - (undocumented)
step - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
range
Description copied from class: SparkSession
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with partition number specified.
- Specified by:
range in class SparkSession<Dataset>
- Parameters:
start - (undocumented)
end - (undocumented)
step - (undocumented)
numPartitions - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
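A sketch covering the overloads (assumes a session named spark):

spark.range(5).show()        // 0, 1, 2, 3, 4
spark.range(2, 10, 3).show() // 2, 5, 8
val parts = spark.range(0, 100, 1, 8).rdd.getNumPartitions // 8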
-
catalog
Description copied from class: SparkSession
Interface through which the user may create, drop, alter or query underlying databases, tables, functions etc.
- Specified by:
catalog in class SparkSession<Dataset>
- Returns:
- (undocumented)
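For illustration, a few common catalog queries (assumes a session named spark):

spark.catalog.listDatabases().show()
spark.catalog.listTables().show()
println(spark.catalog.currentDatabase)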
-
table
Description copied from class: SparkSession
Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.
- Specified by:
table in class SparkSession<Dataset>
- Parameters:
tableName - is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.
- Returns:
- (undocumented)
- Inheritdoc:
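A short sketch (assumes a session named spark; the view name is illustrative):

spark.range(10).createOrReplaceTempView("ids")
val df = spark.table("ids")
assert(df.count() == 10)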
-
sql
Description copied from class: SparkSession
Executes a SQL query substituting positional parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
- Specified by:
sql in class SparkSession<Dataset>
- Parameters:
sqlText - A SQL statement with positional parameters to execute.
args - An array of Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example: 1, "Steven", LocalDate.of(2023, 4, 2). A value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.
- Returns:
- (undocumented)
- Inheritdoc:
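A sketch of positional binding (assumes a session named spark; range(10) is Spark SQL's built-in table-valued function):

val df = spark.sql("SELECT * FROM range(10) WHERE id > ? AND id < ?", Array(2, 7))
df.show() // rows with id 3, 4, 5, 6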
-
sql
Description copied from class: SparkSession
Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
- Specified by:
sql in class SparkSession<Dataset>
- Parameters:
sqlText - A SQL statement with named parameters to execute.
args - A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.
- Returns:
- (undocumented)
- Inheritdoc:
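A sketch of named binding with a Scala Map (assumes a session named spark):

val df = spark.sql(
  "SELECT * FROM range(10) WHERE id BETWEEN :lo AND :hi",
  Map("lo" -> 2, "hi" -> 5))
df.show() // rows with id 2, 3, 4, 5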
-
sql
Description copied from class: SparkSession
Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
- Overrides:
sql in class SparkSession<Dataset>
- Parameters:
sqlText - A SQL statement with named parameters to execute.
args - A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.
- Returns:
- (undocumented)
- Inheritdoc:
-
sql
Description copied from class: SparkSession
Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
- Overrides:
sql in class SparkSession<Dataset>
- Parameters:
sqlText - (undocumented)
- Returns:
- (undocumented)
- Inheritdoc:
-
executeCommand
public Dataset<Row> executeCommand(String runner, String command, scala.collection.immutable.Map<String, String> options)
Execute an arbitrary string command inside an external execution engine rather than Spark. This can be useful when the user wants to run commands outside of Spark, for example executing a custom DDL/DML command for JDBC, creating an index for Elasticsearch, creating cores for Solr, and so on.
The command will be eagerly executed after this method is called, and the returned DataFrame will contain the output of the command (if any).
- Parameters:
runner - The class name of the runner that implements ExternalCommandRunner.
command - The target command to be executed.
options - The options for the runner.
- Returns:
- (undocumented)
- Since:
- 3.0.0
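A heavily hedged sketch; the runner class, command, and connection options below are all hypothetical placeholders for whatever external engine's data source implements ExternalCommandRunner:

val output = spark.executeCommand(
  "org.example.MyExternalCommandRunner",   // hypothetical runner class
  "CREATE INDEX idx ON tbl(col)",          // any command the external engine understands
  Map("url" -> "jdbc:example://host/db"))  // hypothetical runner options
output.show() // the command's output, if any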
-
addArtifact
Description copied from class: SparkSession
Add a single artifact to the current session. Currently only local files with extensions .jar and .class are supported.
- Specified by:
addArtifact in class SparkSession<Dataset>
- Parameters:
path - (undocumented)
- Inheritdoc:
-
addArtifact
Description copied from class: SparkSession
Add a single artifact to the current session. Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.
- Specified by:
addArtifact in class SparkSession<Dataset>
- Parameters:
uri - (undocumented)
- Inheritdoc:
-
addArtifact
Description copied from class: SparkSession
Add a single in-memory artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension. Supported target file extensions are .jar and .class.

== Example ==

addArtifact(bytesBar, "foo/bar.class")
addArtifact(bytesFlat, "flat.class")
// Directory structure of the session's working directory for class files would look like:
// ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
// ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class

- Specified by:
addArtifact in class SparkSession<Dataset>
- Parameters:
bytes - (undocumented)
target - (undocumented)
- Inheritdoc:
-
addArtifact
Description copied from class: SparkSession
Add a single artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension. Supported target file extensions are .jar and .class.

== Example ==

addArtifact("/Users/dummyUser/files/foo/bar.class", "foo/bar.class")
addArtifact("/Users/dummyUser/files/flat.class", "flat.class")
// Directory structure of the session's working directory for class files would look like:
// ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
// ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class

- Specified by:
addArtifact in class SparkSession<Dataset>
- Parameters:
source - (undocumented)
target - (undocumented)
- Inheritdoc:
-
addArtifacts
Description copied from class: SparkSession
Add one or more artifacts to the session. Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.
- Specified by:
addArtifacts in class SparkSession<Dataset>
- Parameters:
uri - (undocumented)
- Inheritdoc:
-
read
Description copied from class: SparkSession
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

sparkSession.read.parquet("/path/to/file.parquet")
sparkSession.read.schema(schema).json("/path/to/file.json")

- Specified by:
read in class SparkSession<Dataset>
- Returns:
- (undocumented)
- Inheritdoc:
-
readStream
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.

sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
close
public void close()
Stop the underlying SparkContext.
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface Closeable
- Since:
- 2.1.0
-